Tag Archives: chrome

Skillz: editing a web page

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/02/skillz-editing-web-page.html

So one of the skillz you ought to have in cybersec is messing with web-pages client-side using Chrome’s Developer Tools. Web-servers give you a bunch of HTML and JavaScript code which, once it reaches your browser, is yours to change and play with. By changing that code, you can make web-sites do a lot of things their developers never intended.

Let me give you an example. It’s only an example — touching briefly on the steps to give you an impression of what’s going on. It’s not a ground-up explanation of everything, which you may find off-putting. Click on the images to expand them so you can see the full detail.

Today is the American holiday called “Presidents Day”. That’s not actually the federal holiday’s name: officially, the federal holiday is still Washington’s Birthday (his actual birthday being February 22), but many states choose to honor other presidents as well, hence “Presidents Day”.
Those of us who donated to Donald Trump’s campaign (note: I donated to all candidates’ campaigns back in 2015) received an email today suggesting that to honor Presidents Day, we should “sign a card” for Trump. It’s a gross dishonoring of the presidents the day is supposed to commemorate, but whatever, it’s the 21st century.
Okay, let’s say we want to honor the current President with a bunch of 🖕🖕🖕🖕 in order to point out the crassness of exploiting this holiday. So we click on the URL [*] and fill it in as such (with multiple skin tones for the middle finger, just so he knows it’s from all of us):
Okay, now we hit the submit button “Add My Name” in order to send this to his campaign. The only problem is, the web page rejects us, telling us “Please enter a valid name” (note: I’m changing font sizes in these screenshots so you can see the message):
This is obviously client-side validation of the field. It’s at this point that we go into Developer Tools in order to turn it off. One way is to [right-click] on that button and, from the popup menu, select “Inspect”, which gets you this screen (yes, the original page is squashed to the left-hand side):
We can edit the HTML right there and add the “novalidate” flag, as shown below, then hit the “Add My Name” button again:
This doesn’t work. The scripts on the webpage aren’t honoring the HTML5 “novalidate” flag. Therefore, we’ll have to go edit those scripts. We do that by clicking on the Sources tab, then press [ctrl-shift-f] to open the ‘find’ window in the sources, and type “Please enter a valid name”, and you’ll find the JavaScript source file (validation.js) where the validation function is located:
If at this point you find all these windows bewildering, then yes, you are on the right track. We typed the search near the bottom, next to the classic search icon 🔍. Right below that are the search results. Clicking on a result opens, in the pane above, the source file (validation.js) among all the possible source files, with the line containing our search term selected. Remember: when you pull down a single HTML page, like the one from donaldtrump.com, it can pull in a zillion JavaScript files as well.
Unlike the HTML, we can’t change the JavaScript on the fly (at least, I don’t know how to). Instead, we have to run more JavaScript. Specifically, we need to run a script that registers a new validation function. If you look in the original source, it contains a function that validates the input by making sure it matches a regular expression:
    jQuery.validator.addMethod("isname", function(value, element) {
        return this.optional(element) || (/^[a-zA-Z]+[ ]+(([',. -][a-zA-Z ])?[a-zA-Z]*)+.?$/.test(value.trim()));
    }, "Please enter a valid name");
From the console, we are going to call the addMethod function ourselves to register a different validation function for isname, specifically a validation function that always returns true, meaning the input is valid. This will override the previously registered function. As the Founders of our country say, the solution to bad JavaScript is not to censor it, but to add more JavaScript.
    jQuery.validator.addMethod("isname", function () {
        return true;
    });
We just type that in the Console as shown below (in the bottom window where Search used to be) and hit [enter]. It gives us the response “undefined”, but that’s OK. (Note: in the screenshot I misspelled it as isName, it should instead be all lowercase isname).
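If you want to double-check that the override took, you can poke at the plugin’s method table from the same Console. This is our own sanity check, not a step from the original post; the jQuery Validation plugin stores registered methods in jQuery.validator.methods:

    // Our own sanity check (hypothetical): the re-registered "isname"
    // method should now accept any value, emoji included.
    jQuery.validator.methods.isname("🖕", null);   // returns true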
Now we can close Developer Tools and press the “Add My Name” button, and we get the following response:
Darn, foiled again. But at least this time, our request went to the server. It was on the server side that the request was rejected. We successfully turned off client-side checking. Had the server accepted our Unicode emoji, we would’ve reached the next step, where it asks for donations. (By the way, the entire purpose of “sign this card” is to get users to donate, nothing else).

Conclusion

So we didn’t actually succeed at doing anything here, but I thought I’d write it up anyway. Editing the web-page client-side, or mucking around with JavaScript client-side, is a skill that every cybersec professional should have. Hopefully, this is an amusing enough example that people will follow the steps to see how this is done.

Dear Obama, From Infosec

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/01/dear-obama-from-infosec.html

Dear President Obama:

We are more than willing to believe Russia was responsible for the hacked emails/records that influenced our election. We believe Russian hackers were involved. Even if these hackers weren’t under the direct command of Putin, we know he could put a stop to such hacking if he chose. It’s like harassment of journalists and diplomats. Putin encourages a culture of thuggery that attacks opposition, without his personal direction, but with his tacit approval.

Your lame attempts to convince us of what we already agree with have irretrievably damaged your message.

Instead of communicating with the American people, you worked through your typical system of propaganda, such as stories in the New York Times quoting unnamed “senior government officials”. We don’t want “unnamed” officials — we want named officials (namely you) whom we can pin down and question. When you work through this system of official leaks, we believe you have something to hide, that the evidence won’t stand on its own.

We still don’t believe the CIA’s conclusions because we don’t know, precisely, what those conclusions are. Are they derived purely from companies like FireEye and CrowdStrike based on digital forensics? Or do you have spies in Russian hacker communities that give better information? This is such an important issue that it’s worth degrading sources of information in order to tell us, the American public, the truth.

You had the DHS and US-CERT issue the “GRIZZLY STEPPE” [*] report “attributing those compromises to Russian malicious cyber activity”. It does nothing of the sort. It’s full of garbage. It contains signatures of viruses that are publicly available, used by hackers around the world, not just Russia. It contains a long list of IP addresses from perfectly normal services, like Tor, Google, Dropbox, Yahoo, and so forth.

Yes, hackers use Yahoo for phishing and malvertising. It doesn’t mean every access of Yahoo is an “Indicator of Compromise”.

For example, I checked my web browser [chrome://net-internals/#dns] and found that last year on November 20th, it accessed two IP addresses that are on the GRIZZLY STEPPE list:

No, this doesn’t mean I’ve been hacked. It means I just had a normal interaction with Yahoo. It means the GRIZZLY STEPPE IoCs are garbage.

If your intent was to show technical information to experts to confirm Russia’s involvement, you’ve done the precise opposite. GRIZZLY STEPPE proves such enormous incompetence that we doubt all the technical details you might have. I mean, it’s possible that you classified the important details and de-classified the junk, but even then, that junk isn’t worth publishing. There’s no excuse for those Yahoo addresses to be in there, or the numerous other problems.

Among the consequences is the Washington Post story claiming Russians hacked into the Vermont power grid. What really happened is that somebody just checked their Yahoo email, thereby accessing one of the same IP addresses I did. How they get from the facts (one person accessed Yahoo email) to the story (Russians hacked power grid) is your responsibility. This misinformation is your fault.

You announced sanctions for the Russian hacking [*]. At the same time, you announced sanctions for Russian harassment of diplomatic staff. These two events are confused in the press, with most stories reporting you expelled 35 diplomats for hacking, when that appears not to be the case.

Your list of individuals/organizations is confusing. It makes sense to name the GRU, FSB, and their officers. But why name “ZorSecurity” but not sole proprietor “Alisa Esage Shevchenko”? It seems a minor target, and you give no information why it was selected. Conversely, you ignore the APT28/APT29 Dukes/CozyBear groups that feature so prominently in your official leaks. You also throw in a couple extra hackers, for finance hacks rather than election hacks. Again, this causes confusion in the press about exactly who you are sanctioning and why. It seems as slipshod as the DHS/US-CERT report.

Mr President, you’ve got two weeks left in office. Russia’s involvement is a huge issue, especially given President-Elect Trump’s pro-Russia stance. If you’ve got better information than this, I beg you to release it. As it stands now, all you’ve done is support Trump’s narrative, making this look like propaganda — and bad propaganda at that. Give us, the infosec/cybersec community, technical details we can look at, analyze, and confirm.

Regards,
Infosec

[$] GStreamer and the state of Linux desktop security

Post Syndicated from jake original http://lwn.net/Articles/708196/rss

Recently Chris Evans, an IT security expert currently working for Tesla, published a series of blog posts about security vulnerabilities in the GStreamer multimedia framework. A combination of the Chrome browser and GNOME-based desktops creates a particularly scary vulnerability. Evans also made a provocative statement: that vulnerabilities of this severity currently wouldn’t happen in Windows 10. Is the state of security on the Linux desktop really that bad — and what can be done about it?

Subscribers can click below for the full story from this week’s edition.

Amazon AppStream 2.0 – Stream Desktop Apps from AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-appstream-2-0-stream-desktop-apps-from-aws/

My colleague Gene Farrell wrote the guest post below to tell you how the original vision for Amazon AppStream evolved in the face of customer feedback.

Jeff;


At AWS, helping our customers solve problems and serve their customers with technology is our mission. It drives our thinking, and it’s at the center of how we innovate. Our customers use services from AWS to build next-generation mobile apps, create delightful web experiences, and even run their core IT workloads, all at global scale.

While we have seen tremendous innovation and transformation in mobile, web, and core IT, relatively little has changed with desktops and desktop applications. End users don’t yet enjoy freedom in where and how they work; IT is stuck with rigid and expensive systems to manage desktops, applications, and a myriad of devices; and securing company information is harder than ever. In many ways, the cloud seems to have bypassed this aspect of IT.

Our customers want to change that. They want the same benefits of flexibility, scale, security, performance, and cost for desktops and applications as they’re seeing with mobile, web, and core IT. A little over two years ago, we introduced Amazon WorkSpaces, a fully managed, secure cloud desktop service that provides a persistent desktop running on AWS. Today, I am excited to introduce you to Amazon AppStream 2.0, a fully managed, secure application streaming service for delivering your desktop apps to web browsers.

Customers have told us that they have many traditional desktop applications that need to work on multiple platforms. Maintaining these applications is complicated and expensive, and customers are looking for a better solution. With AppStream 2.0, you can provide instant access to desktop applications using a web browser on any device, by streaming them from AWS. You don’t need to rewrite your applications for the cloud, and you only need to maintain a single version. Your applications and data remain secure on AWS, and the application stream is encrypted end to end.

Looking back at the original AppStream
Before I get into more details about AppStream 2.0, it’s worth looking at the history of the original Amazon AppStream service. We launched AppStream in 2013 as an SDK-based service that customers could use to build streaming experiences for their desktop apps and move these apps to the cloud. We believed that the SDK approach would enable customers to integrate application streaming into their products. We thought game developers and graphics ISVs would embrace this development model, but it turned out to be more work than we anticipated, and required significant engineering investment to get started. Those who did try it found that the feature set did not meet their needs. For example, AppStream offered only a single instance type, based on the g2.2xlarge EC2 instance. This limited the service to high-end applications where performance would justify the cost. However, the economics didn’t make sense for a large number of applications.

With AppStream, we set out to solve a significant customer problem, but failed to get the solution right. This is a risk that we are willing to take at Amazon. We want to move quickly, explore areas where we can help customers, but be prepared for failure. When we fail, we learn and iterate fast. In this case, we continued to hear from customers that they needed a better solution for desktop applications, so we went back to the drawing board. The result is AppStream 2.0.

Benefits of AppStream 2.0
AppStream 2.0 addresses many of the concerns we heard from customers who tried the original AppStream service. Here are a few of the benefits:

  • Run desktop applications securely on any device in an HTML5 web browser on Windows and Linux PCs, Macs, and Chromebooks.
  • Instant-on access to desktop applications from wherever users are. There are no delays, no large files to download, and no time-consuming installations. Users get a responsive, fluid experience that is just like running natively installed apps.
  • Simple end user interface so users can run in full screen mode, open multiple applications within a browser tab, and easily switch and interact between them. You can upload files to a session, access and edit them, and download them when you’re done. You can also print, listen to audio, and adjust bandwidth to optimize for your network conditions.
  • Secure applications and data that remain on AWS – only encrypted pixels are streamed to end users. Application streams and user input flow through a secure streaming gateway on AWS over HTTPS, making them firewall friendly. Applications can run inside your own virtual private cloud (VPC), and you can use Amazon VPC security features to control access. AppStream 2.0 supports identity federation, which allows your users to access their applications using their corporate credentials.
  • Fully managed service, so you don’t need to plan, deploy, manage, or upgrade any application streaming infrastructure. AppStream 2.0 manages the AWS resources required to host and run your applications, scales automatically, and provides access to your end users on demand.
  • Consistent, scalable performance on AWS, with access to compute capabilities not typically available on local devices. You can instantly scale locally and globally, and ensure that your users always get a low-latency experience.
  • Multiple streaming instance types to run your applications. You can use instance types from the General Purpose, Compute Optimized, and Memory Optimized instance families to optimize application performance and reduce your overall costs.
  • NICE DCV provides secure, high-performance streaming access to applications, delivering a fluid interactive experience that automatically adjusts to network conditions.

Pricing & availability
With AppStream 2.0, you pay only for the streaming instances that you use, and a small monthly fee per authorized user. The charge for streaming instances depends on the instance type that you select, and the maximum number of concurrent users that will access their applications.

A user fee is charged per unique authorized user accessing applications in a region in any given month.  The user fee covers the Microsoft RDS SAL license, and may be waived if you bring your own RDS CAL licenses via Microsoft’s license mobility program. AppStream 2.0 offers a Free Tier, which provides an admin experience for getting started. The Free Tier includes 40 hours per month, for up to two months. For more information, see this page.

AppStream 2.0 is available today in US East (N. Virginia), US West (Oregon), Europe (Ireland), and AP-Northeast (Tokyo) Regions. You can try the AppStream 2.0 end user experience for free today, with no setup required, by accessing sample applications already installed on AppStream 2.0. To access the Try It Now experience, log in with your AWS account and choose an app to get started.

To learn more about AppStream 2.0, visit the AppStream page.

Gene Farrell, Vice President, AWS Enterprise Applications & EC2 Windows

In the Works – Amazon EC2 Elastic GPUs

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-work-amazon-ec2-elastic-gpus/

I have written about the benefits of GPU-based computing in the past, most recently as part of the launch of the P2 instances with up to 16 GPUs. As I have noted in the past, GPUs offer incredible power and scale, along with the potential to simultaneously decrease your time-to-results and your overall compute costs.

Today I would like to tell you a little bit about a new GPU-based feature that we are working on.  You will soon have the ability to add graphics acceleration to existing EC2 instance types. When you use G2 or P2 instances, the instance size determines the number of GPUs. While this works well for many types of applications, we believe that many other applications are now ready to take advantage of a newer and more flexible model.

Amazon EC2 Elastic GPUs
The upcoming Amazon EC2 Elastic GPUs give you the best of both worlds. You can choose the EC2 instance type and size that works best for your application and then indicate that you want to use an Elastic GPU when you launch the instance, and take your pick of four different sizes:

Name          GPU Memory
eg1.medium    1 GiB
eg1.large     2 GiB
eg1.xlarge    4 GiB
eg1.2xlarge   8 GiB

Today, you have the ability to set up freshly created EBS volumes when you launch new instances. You’ll be able to do something similar with Elastic GPUs, specifying the desired size during the launch process, with the option to stop, modify, and then start a running instance in order to make a change.
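Since the feature is still in the works, the exact interface could change, but launching an instance with an Elastic GPU attached might look something like the following CLI sketch. The option name and IDs here are assumptions for illustration, not a confirmed API:

    # Hypothetical sketch: launch a general-purpose instance with a
    # medium Elastic GPU attached (option name and AMI ID are placeholders).
    aws ec2 run-instances \
        --image-id ami-12345678 \
        --instance-type m4.large \
        --elastic-gpu-specification Type=eg1.medium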

Starting with OpenGL
Our Amazon-optimized OpenGL library will automatically detect and make use of Elastic GPUs. We’ll start out with Windows support for OpenGL, and plan to add support for the Amazon Linux AMI and other versions of OpenGL after that. We are also giving consideration to support for other 3D APIs including DirectX and Vulkan (let us know if these would be of interest to you). We will include the Amazon-optimized OpenGL library in upcoming revisions to the existing Microsoft Windows AMI.

OpenGL is great for rendering, but how do you see what’s been rendered? Great question! One option is to use the NICE Desktop Cloud Visualization (acquired earlier this year — Amazon Web Services to Acquire NICE) to stream the rendered content to any HTML5-compatible browser or device. This includes recent versions of Firefox and Chrome, along with all sorts of phones and tablets.

I believe that this unique combination of hardware and software will be a great host for all sorts of 3D visualization and technical computing applications. Two of our customers have already shared some of their feedback with us.

Ray Milhem (VP of Enterprise Solutions & Cloud) at ANSYS told us:

ANSYS Enterprise Cloud delivers a virtual simulation data center, optimized for AWS. It delivers a rich interactive graphics experience critical to supporting the end-to-end engineering simulation processes that allow our customers to deliver innovative product designs. With Elastic GPU, ANSYS will be able to more easily deliver this experience right-sized to the price and performance needs of our customers. We are certifying ANSYS applications to run on Elastic GPU to enable our customers to innovate more efficiently on the cloud.

Bob Haubrock (VP of NX Product Management) at Siemens PLM also had some nice things to say:

Elastic GPU is a game-changer for Computer Aided Design (CAD) in the cloud. With Elastic GPU, our customers can now run Siemens PLM NX on Amazon EC2 with professional-grade graphics, and take advantage of the flexibility, security, and global scale that AWS provides. Siemens PLM is excited to certify NX on the EC2 Elastic GPU platform to help our customers push the boundaries of Design & Engineering innovation.

New Certification Program
In order to help software vendors and developers make sure that their applications take full advantage of Elastic GPUs and our other GPU-based offerings, we are launching the AWS Graphics Certification Program today. This program offers credits and tools that will help to quickly and automatically test applications across the supported matrix of instance and GPU types.

Stay Tuned
As always, I will share additional information just as soon as it becomes available!

Jeff;

New – Web Access for Amazon WorkSpaces

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-web-access-for-amazon-workspaces/

We launched WorkSpaces in late 2013 (Amazon WorkSpaces – Desktop Computing in the Cloud) and have been adding new features at a rapid clip ever since, including many throughout 2016.

Today we are adding another: Amazon WorkSpaces Web Access. You can now access your WorkSpace from recent versions of Chrome or Firefox running on Windows, Mac OS X, or Linux, and be productive on heavily restricted networks and in situations where installing a WorkSpaces client is not an option. You don’t have to download or install anything, and you can use this from a public computer without leaving any private or cached data behind.

To use Amazon WorkSpaces Web Access, simply visit the registration page using a supported browser and enter the registration code for your WorkSpace:

Then log in with your user name and password:

And here you go (yes, this is IE and Firefox running on WorkSpaces, displayed in Chrome):

This feature is available for all new WorkSpaces and you can access it at no additional charge after your administrator enables it:

Existing WorkSpaces must be rebuilt and custom images must be refreshed in order to take advantage of Web Access.

Jeff;

 

New – GPU-Powered Amazon Graphics WorkSpaces

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-gpu-powered-amazon-graphics-workspaces/

As you can probably tell from my I Love My Amazon WorkSpace post I am kind of a fan-boy!

Since writing that post I have found out that I am not alone, and that there are many other WorkSpaces fan-boys and fan-girls out there. Many AWS customers are enjoying their fully managed, secure desktop computing environments almost as much as I am. From their perspective as users, they like to be able to access their WorkSpace from a multitude of supported devices including Windows and Mac computers, PCoIP Zero Clients, Chromebooks, iPads, Fire tablets, and Android tablets. As administrators, they appreciate the ability to deploy high-quality cloud desktops for any number of users. And, finally, as business leaders they like the ability to pay hourly or monthly for the WorkSpaces that they launch.

New Graphics Bundle
These fans already have access to several different hardware choices: the Value, Standard, and Performance bundles. With 1 or 2 vCPUs (virtual CPUs) and 2 to 7.5 GiB of memory, these bundles are a good fit for many office productivity use cases.

Today we are expanding the WorkSpaces family by adding a new GPU-powered Graphics bundle. This bundle offers a high-end virtual desktop that is a great fit for 3D application developers, 3D modelers, and engineers that use CAD, CAM, or CAE tools at the office. Here are the specs:

  • Display – NVIDIA GPU with 1,536 CUDA cores and 4 GiB of graphics memory.
  • Processing – 8 vCPUs.
  • Memory – 15 GiB.
  • System volume – 100 GB.
  • User volume – 100 GB.

This new bundle is available in all regions where WorkSpaces currently operates, and can be used with any of the devices that I mentioned above. You can run the license-included operating system (Windows Server 2008 with Windows 7 Desktop Experience), or you can bring your own licenses for Windows 7 or 10. Applications that make use of OpenGL 4.x, DirectX, CUDA, OpenCL, and the NVIDIA GRID SDK will be able to take advantage of the GPU.

As you start to think about your petabyte-scale data analysis and visualization, keep in mind that these instances are located just light-feet away from EC2, RDS, Amazon Redshift, S3, and Kinesis. You can do your compute-intensive analysis server-side, and then render it in a visually compelling way on an adjacent WorkSpace. I am highly confident that you can use this combination of AWS services to create compelling applications that would simply not be cost-effective or achievable in any other way.

There is one important difference between the Graphics Bundle and the other bundles. Due to the way that the underlying hardware operates, WorkSpaces that run this bundle do not save the local state (running applications and open documents) when used in conjunction with the AutoStop running mode that I described in my Amazon WorkSpaces Update – Hourly Usage and Expanded Root Volume post. We recommend saving open documents and closing applications before disconnecting from your WorkSpace or stepping away from it for an extended period of time.

Demo
I don’t build 3D applications or use CAD, CAM, or CAE tools. However, I do like to design and build cool things with LEGO® bricks! I fired up the latest version of LEGO Digital Designer (LDD) and spent some time enhancing a design. Although I was not equipped to do any benchmarks, the GPU-enhanced version definitely ran more quickly and produced a higher quality finished product. Here’s a little design study I’ve been working on:

With my design all set up it was time to start building. Instead of trying to re-position my monitor so that it would be visible from my building table, I simply logged in to my Graphics WorkSpace from my Fire tablet. I was able to scale and rotate my design very quickly, even though I had very modest local computing power. Here’s what I saw on my Fire:

As you can see, the two screens (desktop and Fire) look identical! I stepped over to my building table and was able to set things up so that I could see my design and find my bricks:

Pricing
Graphics WorkSpaces are available with an hourly billing option. You pay a small, fixed monthly fee to cover infrastructure costs and storage, and an hourly rate for each hour that the WorkSpace is used during the month. Prices start at $22/month + $1.75 per hour in the US East (Northern Virginia) Region; see the WorkSpaces Pricing page for more information.

Jeff

 

Pirate Bay Risks “Repeat Offender” Ban From Google

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-bay-risks-repeat-offender-ban-google-161111/

Google regularly checks websites for malicious and harmful content to help people avoid running into dangerous situations.

This safe browsing service is used by modern browsers such as Chrome, Firefox, and Safari, which throw up a warning before people attempt to visit risky sites.

Frequent users of The Pirate Bay are familiar with these ominous warning signs. The site has been flagged several times over the past few years and twice in recent weeks.

This issue is more common on pirate sites as these only have access to lower-tier advertising agencies, some of which have minimal screening procedures for ads.

Thus far the browser roadblocks have always disappeared after the rogue advertisements have gone away, but according to Google, the red flag can become more permanent in the future.

The company has announced that it has implemented a “repeat offender” policy to address sites that frequently run into these problems. This is to prevent sites from circumventing the security measures by turning malicious content off and on.

“Over time, we’ve observed that a small number of websites will cease harming users for long enough to have the warnings removed, and will then revert to harmful activity,” Google’s Safe Browsing Team writes.

“Safe Browsing will begin to classify these types of sites as ‘Repeat Offenders’,” the announcement adds.

Chrome’s Pirate Bay block


The new policy will only affect sites that link to harmful content. So-called ‘hacked’ sites, which Google also warns about, are not part of these measures.

Under these new rules, The Pirate Bay is also at risk of being benched for 30 days if it’s caught more than once in a short period of time. The same applies to all other sites on the Internet of course.

TorrentFreak asked Google what the timeframe is for sites to get a repeat offender classification, but the company hasn’t yet replied.

The Pirate Bay team isn’t really concerned about the new policy. They stress that in their case, the issue lies with third-party advertisers which they have no control over.

“Tell Google to get an ad blocker?” TPB’s Spud17 notes.

“Seriously though, there aren’t a lot of ad agencies willing to work with sharing sites. The ones we have access to aren’t very concerned with what they put up, and don’t exactly give us a preview of what their clients send them before they air it.”

The TPB team doesn’t see their site as a repeat offender. However, for the ad agencies there’s a lot at stake so perhaps this measure will trigger them to be more vigilant.

“It’s infrequent enough, I don’t believe TPB will be flagged as a Repeat Offender. Ultimately, that will cost the ad agencies dearly if all their clients were permanently denied visitors.

“So maybe in the long run those agencies with a tendency to serve malicious ads will better screen their clients,” Spud17 adds.

Even if The Pirate Bay or other pirate sites get banned for thirty days, it’s not the end of the world. People can easily disable the malware checking option in their browser to regain direct access. That is, if they are willing to take the risk.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

In Case You Missed These: AWS Security Blog Posts from September and October

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-september-and-october/

In case you missed any AWS Security Blog posts from September and October, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from enabling multi-factor authentication on your AWS API calls to using Amazon CloudWatch Events to monitor application health.

October

October 30: Register for and Attend This November 10 Webinar—Introduction to Three AWS Security Services
As part of the AWS Webinar Series, AWS will present Introduction to Three AWS Security Services on Thursday, November 10. This webinar will start at 10:30 A.M. and end at 11:30 A.M. Pacific Time. AWS Solutions Architect Pierre Liddle shows how AWS Identity and Access Management (IAM), AWS Config Rules, and AWS Cloud Trail can help you maintain control of your environment. In a live demo, Pierre shows you how to track changes, monitor compliance, and keep an audit record of API requests.

October 26: How to Enable MFA Protection on Your AWS API Calls
Multi-factor authentication (MFA) provides an additional layer of security for sensitive API calls, such as terminating Amazon EC2 instances or deleting important objects stored in an Amazon S3 bucket. In some cases, you may want to require users to authenticate with an MFA code before performing specific API requests, and by using AWS Identity and Access Management (IAM) policies, you can specify which API actions a user is allowed to access. In this blog post, I show how to enable an MFA device for an IAM user and author IAM policies that require MFA to perform certain API actions such as EC2’s TerminateInstances.
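As a minimal sketch of the pattern (our illustration, not the post’s exact policy), an IAM policy can deny a sensitive action whenever the request was not MFA-authenticated by using the aws:MultiFactorAuthPresent condition key:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Deny",
          "Action": "ec2:TerminateInstances",
          "Resource": "*",
          "Condition": {
            "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
          }
        }
      ]
    }

Using BoolIfExists (rather than Bool) makes the deny apply even when the key is absent from the request context entirely, as it is for plain long-term access key calls.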

October 19: Reserved Seating Now Open for AWS re:Invent 2016 Sessions
Reserved seating is new to re:Invent this year and is now open! Some important things you should know about reserved seating:

  1. All sessions have a predetermined number of seats available and must be reserved ahead of time.
  2. If a session is full, you can join a waitlist.
  3. Waitlisted attendees will receive a seat in the order in which they were added to the waitlist and will be notified via email if and when a seat is reserved.
  4. Only one session can be reserved for any given time slot (in other words, you cannot double-book a time slot on your re:Invent calendar).
  5. Don’t be late! The minute the session begins, if you have not badged in, attendees waiting in line at the door might receive your seat.
  6. Waitlisting will not be supported onsite and will be turned off 7-14 days before the beginning of the conference.

October 17: How to Help Achieve Mobile App Transport Security (ATS) Compliance by Using Amazon CloudFront and AWS Certificate Manager
Web and application users and organizations have expressed a growing desire to conduct most of their HTTP communication securely by using HTTPS. At its 2016 Worldwide Developers Conference, Apple announced that starting in January 2017, apps submitted to its App Store will be required to support App Transport Security (ATS). ATS requires all connections to web services to use HTTPS and TLS version 1.2. In addition, Google has announced that starting in January 2017, new versions of its Chrome web browser will mark HTTP websites as being “not secure.” In this post, I show how you can generate Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificates by using AWS Certificate Manager (ACM), apply the certificates to your Amazon CloudFront distributions, and deliver your websites and APIs over HTTPS.
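The flow described in that post is, roughly: request a certificate from ACM, then associate it with your CloudFront distribution. A minimal CLI sketch of the first step (the domain name is a placeholder, and ACM validates domain ownership before issuing):

    # Request a certificate for a site served through CloudFront.
    # Certificates used with CloudFront must live in us-east-1.
    aws acm request-certificate \
        --domain-name www.example.com \
        --region us-east-1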

October 5: Meet AWS Security Team Members at Grace Hopper 2016
For those of you joining this year’s Grace Hopper Celebration of Women in Computing in Houston, you may already know the conference will have a number of security-specific sessions. A group of women from AWS Security will be at the conference, and we would love to meet you to talk about your cloud security and compliance questions. Are you a student, an IT security veteran, or an experienced techie looking to move into security? Make sure to find us to talk about career opportunities.

September

September 29: How to Create a Custom AMI with Encrypted Amazon EBS Snapshots and Share It with Other Accounts and Regions
An Amazon Machine Image (AMI) provides the information required to launch an instance (a virtual server) in your AWS environment. You can launch an instance from a public AMI, customize the instance to meet your security and business needs, and save configurations as a custom AMI. With the recent release of the ability to copy encrypted Amazon Elastic Block Store (Amazon EBS) snapshots between accounts, you now can create AMIs with encrypted snapshots by using AWS Key Management Service (KMS) and make your AMIs available to users across accounts and regions. This allows you to create your AMIs with required hardening and configurations, launch consistent instances globally based on the custom AMI, and increase performance and availability by distributing your workload while meeting your security and compliance requirements to protect your data.
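As a rough sketch of the cross-region piece (region names, IDs, and the key alias are placeholders), copying an AMI while encrypting its EBS snapshots under a KMS key looks like this with the AWS CLI:

    # Copy an AMI to another region, encrypting its EBS snapshots with a
    # KMS key in the destination region (all IDs below are placeholders).
    aws ec2 copy-image \
        --source-region us-east-1 \
        --source-image-id ami-12345678 \
        --region us-west-2 \
        --name "hardened-base-encrypted" \
        --encrypted \
        --kms-key-id alias/my-ami-key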

September 19: 32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog
AWS re:Invent 2016 begins November 28, and now, the live session catalog includes 32 security and compliance sessions. 19 of these sessions are in the Security & Compliance track and 13 are in the re:Source Mini Con for Security Services. All 32se titles and abstracts are included below.

September 8: Automated Reasoning and Amazon s2n
In June 2015, AWS Chief Information Security Officer Stephen Schmidt introduced AWS’s new Open Source implementation of the SSL/TLS network encryption protocols, Amazon s2n. s2n is a library that has been designed to be small and fast, with the goal of providing you with network encryption that is more easily understood and fully auditable. In the 14 months since that announcement, development on s2n has continued, and we have merged more than 100 pull requests from 15 contributors on GitHub. Those active contributors include members of the Amazon S3, Amazon CloudFront, Elastic Load Balancing, AWS Cryptography Engineering, Kernel and OS, and Automated Reasoning teams, as well as 8 external, non-Amazon Open Source contributors.

September 6: IAM Service Last Accessed Data Now Available for the Asia Pacific (Mumbai) Region
In December, AWS Identity and Access Management (IAM) released service last accessed data, which helps you identify overly permissive policies attached to an IAM entity (a user, group, or role). Today, we have extended service last accessed data to support the recently launched Asia Pacific (Mumbai) Region. With this release, you can now view the date when an IAM entity last accessed an AWS service in this region. You can use this information to identify unnecessary permissions and update policies to remove access to unused services.

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig

Inktober

Post Syndicated from Eevee original https://eev.ee/blog/2016/10/23/inktober/

Inktober is an ancient and hallowed art tradition, dating all the way back to sometime, when it was started by someone. The idea is simple: draw something in ink every day. Real ink. You know. On paper.

I tried this last year. I quit after four days. Probably because I tried to do it without pencil sketches, and I’m really not very good at drawing things correctly the first time. I’d hoped that forcing myself to do it would spark some improvement, but all it really produced was half a week of frustration and bad artwork.

This year, I was convinced to try again without unnecessarily handicapping myself, so I did that. Three weeks and more than forty ink drawings later, here are some thoughts.

Some background

I’ve been drawing seriously since the beginning of 2015. I spent the first few months working primarily in pencil, until I was gifted a hand-me-down tablet in March; almost everything has been digital since then.

I’ve been fairly lax about learning to use color effectively — I have enough trouble just producing a sketch I like, so I’ve mostly been trying to improve there. Doesn’t feel worth the effort to color a sketch I’m not really happy with, and by the time I’m really happy with it, I’m itching to draw something else. Whoops. Until I get quicker or find some mental workaround, monochrome ink is a good direction to try.

I have an ongoing “daily” pokémon series, so I’ve been continuing that in ink. (Everyone else seems to be using some list of single-word prompts, but I didn’t even know about that until after I’d started, so, whoops.)

I’ve got a few things I want to get better at:

  • Detailing, whatever that means. Part of the problem is that I’m not sure what it means. My art is fairly simple and cartoony, and I know it’s possible to be more detailed without doing realistic shading, but I don’t have a grasp of how to think about that.

  • Better edges, which mostly means line weight. I mentally categorize this as a form of scale, which also includes tips like “don’t let parallel lines get too close together” and “don’t draw one or two very small details”.

  • Better backgrounds and environments. Or, let’s be honest, any backgrounds and environments — I draw an awful lot of single characters floating in an empty white void. My fixed-size canvas presents an obvious and simple challenge: fill the page!

  • More interesting poses, and relatedly, getting a better hang of anatomy. I started drawing the pokémon series partly for this reason: a great many pokémon have really unusual shapes I’ve tried drawing before. Dealing with weird anatomy and trying to map it to my existing understanding should hopefully flex some visualization muscles.

  • Lighting, probably? I’m aware that things not facing a light source are in shadow, but my understanding doesn’t extend very far beyond that. How does light affect a large outdoor area? How can you represent the complexity of light and shadow with only a single pen? Art, especially cartoony art, has an entire vocabulary of subtle indicators of shadow and volume that I don’t know much about.

Let’s see what exactly I’ve learned.

Analog materials are very different

I’ve drawn plenty of pencil sketches on paper, and I’ve done a few watercolors, but I’ve never done this volume of “serious” art on paper before.

All my inks so far are in a 3.5” × 5” sketchbook. I’ll run out of pages in a few days, at which point I’ll finish up the month in a bigger sketchbook. It’s been a mixed blessing: I have less page to fill, but details are smaller and more fiddly, so mistakes are more obvious. I also don’t have much room for error with the composition.

I started out drawing with a small black Faber–Castell “PITT artist pen”. Around day five, I borrowed C3 and C7 (light and dark cool greys) Copic sketch markers from Mel; later I got a C5 as well. A few days ago I bought a Lamy Safari fountain pen with Noodler’s Heart of Darkness ink.

Both the FC pen and the fountain pen are ultimately still pens, but they have some interesting differences in edge cases. Used very lightly at an extreme angle, the FC pen produces very scratchy-looking lines… sometimes. Sometimes it does nothing instead, and you must precariously tilt the pen until you find the magical angle, hoping you don’t suddenly get a solid line where you didn’t want it. The Lamy has been much more consistent: it’s a little more willing to draw thinner lines than it’s intended for, and it hasn’t created any unpleasant surprises. The Lamy feels much smoother overall, like it flows, which is appropriate since that’s how fountain pens work.

Markers are interesting. The last “serious” art I did on paper was watercolor, which is pretty fun — I can water a color down however much I want, and if I’m lucky and fast, I can push color around on the paper a bit before it dries. Markers, ah, not so much. Copics are supposed to be blendable, but I’ve yet to figure out how to make that happen. It might be that my sketchbook’s paper is too thin, but the ink seems to dry within seconds, too fast for me to switch markers and do much of anything. For the same reason, I have to color an area by… “flood-filling”? I can’t let the edge of the colored area dry, or when I go back to extend that edge, I’ll be putting down a second layer of ink and create an obvious dark band. I’ve learned to keep the edge wet as much as possible.

On the plus side, going over dry ink in the same color will darken it, and I’ve squeezed several different shades of gray out of just the light marker. The brush tip can be angled in several different ways to make different shapes; I’ve managed a grassy background and a fur texture just by holding the marker differently. Marker ink does bleed very slightly, but it tends to stop at pen ink, a feature I’ve wanted in digital art for at least a century. I can also kinda make strokes that fade out by moving the marker quickly and lifting it off the paper as I go; surely there are more clever things to be done here, but I’ve yet to figure them out.

The drawing of bergmite above was done as the light marker started to run dry, which is not a problem I was expecting. The marker still worked, but not very well. The strokes on the cave wall in the background aren’t a deliberate effect; those are the strokes the marker was making, and I tried to use them as best I could. I didn’t have the medium marker yet, and the dark marker is very dark — almost black. I’d already started laying down marker, so I couldn’t very well finish the picture with just the pen, and I had to improvise.

Ink is permanent

Well. Obviously.

I have to be pretty careful about what I draw, which creates a bit of a conflict. If I make smooth, confident strokes, I’m likely to fuck them up, and I can’t undo and try again. If I make a lot of short strokes, I get those tell-tale amateurish scratchy lines. If I trace my sketch very carefully and my hand isn’t perfectly steady, the resulting line will be visibly shaky.

I probably exacerbated the shaky lines with my choice of relatively small paper; there’s no buffer between those tiny wobbles and the smallest level of detail in the drawing itself. I can’t always even see where my tiny sketch is going, because my big fat fingers are in the way.

I’ve also had the problem that my sketch is such a mess that I can’t tell where a line is supposed to be going… until I’ve drawn it and it’s obviously wrong. Again, small paper exacerbates this by compressing sketches.

Since I can’t fix mistakes, I’ve had to be a little creative about papering over them.

  • I did one ink with very stark contrast: shadows were completely filled with ink, highlights were bare paper. No shading, hatching, or other middle ground. I’d been meaning to try the approach anyway, but I finally did it after making three or four glaring mistakes. In the final work, they’re all hidden in shadow, so you can’t really tell anything ever went wrong.

  • I’ve managed to disguise several mistakes of the “curved this line too early” variety just by adding some more parallel strokes and pretending I intended to hatch it all along.

  • One of the things I’ve been trying to figure out is varying line weight, and one way to vary it is to make edges thicker when in shadows. A clever hack has emerged here.

    You see, it’s much easier for me to draw an upwards arc than a downwards arc. (I think this is fairly universal?) I can of course just rotate the paper, but if I’m drawing a cylinder, it’s pretty obvious when the top was drawn with a slight bias in one direction and the bottom was drawn with a slight bias in the other direction.

    My lifehack is to draw the top and bottom with the paper oriented the same way, then gradually thicken the bottom, “carving” it into the right shape as I go. I can make a lot of small adjustments and still end up with a single smooth line that looks more or less deliberate.

  • As a last resort… leave it and hope no one notices. That’s what I did for the floatzel above, who has a big fat extra stroke across their lower stomach. It’s in one of the least interesting parts of the picture, though, so it doesn’t really stand out, even though it’s on one of the lightest surfaces.

Ink takes a while

Ink drawings feel like they’ve consumed my entire month. Sketching and then lining means drawing everything twice. Using physical ink means I have to nail the sketch — but I’m used to digital, where I can sketch sloppily and then fix up lines as I go. I also can’t rearrange the sketch, move it around on the paper if I started in the wrong place, or even erase precisely, so I’ve had to be much more careful and thoughtful even with pencil. That’s a good thing — I don’t put nearly enough conscious thought into what I’m drawing — but it definitely takes longer. In a few thorny cases I’ve even resorted to doing a very loose digital sketch, then drawing the pencil sketch based off of that.

All told, each one takes maybe two hours, and I’ve been doing two at a time… but wait, that’s still only four hours, right? How are they taking most of a day?

I suspect a bunch of factors are costing me more time than expected. If I can’t think of a scene idea, I’ll dawdle on Twitter for a while. Two “serious” attempts in a medium I’m not used to can be a little draining and require a refractory period. Fragments of time between or around two larger tasks are, of course, lost forever. And I guess there’s that whole thing where I spent half the month waking up in the middle of the night for no reason and then being exhausted by late evening.

Occasionally I’ve experimented with some approach that turns out to be incredibly tedious and time-consuming, like the early Gardevoir above. You would not believe how long that damn grass took. Or maybe you would, if you’d ever tried similar. Even the much lazier tree-covered mountain in the background seemed to take a while. And this is on a fairly small canvas!

I’m feeling a bit exhausted with ink work at this point, which is not the best place to be after buying a bunch of ink supplies. I definitely want to do more of it in the future, but maybe not daily. I also miss being able to undo. Sweet, sweet undo.

Precision is difficult, and I am bad at planning

These turn out to be largely the same problem.

I’m not a particularly patient person, so I like to jump from the sketch into the inking as soon as possible. Sometimes this means I overlook some details. Here’s that whole “not consciously thinking enough” thing again. Consider, in the above image,

  • The two buildings at the top right are next to each other, yet the angles of their roofs suggest they’re facing in slightly different directions, which doesn’t make a lot of sense for artificial structures.

  • The path leading from the dock doesn’t quite make sense, and the general scale of the start of the dock versus the shrubs and trees is nonsense. The trees themselves are pretty cool, but it looks like I plopped them down individually without really having a full coherent plan going in. Which is exactly what happened.

    Imagining spaces in enough detail to draw them is tough, and not something I’ve really had to do much before. It’s ultimately the same problem I have with game level design, though, so hopefully a breakthrough in one will help me with the other.

  • Phantump’s left eye has a clear white edge showing the depth of the hole in the trunk, but the right eye’s edge was mostly lost to some errant strokes and subsequent attempts to fix them. Also, even the left margin is nowhere near as thick as the trunk’s bottom edge.

  • The crosshatched top of phantump’s head blends into the noisy grassy background. The fix for this is to leave a thin white edge around the top of the head. I think I intended to do this, then completely forgot about it as I was drawing the grass. I suppose I’m not used to reasoning about negative space; I can’t mark or indicate it in any way, nor erase the ink if I later realize I laid down too much.

  • The pupils don’t quite match, but I’d already carved them down a good bit. Negative space problem again. Highlights on dark areas have been a recurring problem all month, especially with markers.

I have no idea how people make beautifully precise inkwork. At the same time, I’ve long had the suspicion that I worry too much about precision and should be a lot looser. I’m missing something here, and I don’t know what it is.

What even is pokémon anatomy

This is a wigglytuff. Wigglytuffs are tall blobs with ears.

I had such a hard time sketching this. (Probably why I rushed the background.)

It turns out that if you draw a wigglytuff even slightly off, the result is a tall blob with ears rather than a wigglytuff. That makes no sense, especially given that wigglytuffs are balloons. Surely, the shape shouldn’t be such a strong part of the wigglytuff identity, and yet it is.

Maybe half of the pokémon I’ve drawn have had some anatomical surprise, even ones I thought I was familiar with. Aerodactyl and huntail have a really pronounced lower jaw. Palpitoad has no arms at all. Pelipper is 70% mouth. Zangoose seems like a straightforward mammal at first glance, but the legs and body and head are all kind of a single blob. Numerous pokémon have no distinct neck, or no distinct shoulders, or a very round abdomen with legs kind of arbitrarily attached somewhere.

Progress, maybe

I don’t know what precisely I’ve gotten out of this experience. I can’t measure artistic progress from one day to the next. I do feel like I’ve gleaned some things, but they seem to be very abstract things. I’m out of the total beginner weeds and solidly into the intermediate hell of just picking up hundreds of little things no one really talks about. All I can do is cross my fingers and push forwards.

The crowd favorite so far is this mega rayquaza, which is kinda funny to me because I don’t feel like I did anything special here. I just copied a bunch of fiddly details. It looks cool, but it felt more like rote work than a struggle to do a new thing.

My own favorite is this much simpler qwilfish. It’s the culmination of several attempts to draw water that I liked, and it came out the best by far. The highlight is also definitely the best I’ve drawn this month. Interesting how that works out.

The rest are on Tumblr, or in this single Twitter thread.

Bringing the Viewer In: The Video Opportunity in Virtual Reality

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/151940036881

By Satender Saroha, Video Engineering

Virtual reality (VR) 360° videos are the next frontier of how we engage with and consume content. Unlike a traditional scenario in which a person views a screen in front of them, VR places the user inside an immersive experience. A viewer is “in” the story, and not on the sidelines as an observer.

Ivan Sutherland, widely regarded as the father of computer graphics, laid out the vision for virtual reality in his famous 1965 speech, “Ultimate Display” [1]. In that speech he said, “You shouldn’t think of a computer screen as a way to display information, but rather as a window into a virtual world that could eventually look real, sound real, move real, interact real, and feel real.”

Over the years, significant advancements have been made to bring reality closer to that vision. With the advent of headgear capable of rendering 3D spatial audio and video, realistic sound and visuals can be virtually reproduced, delivering immersive experiences to consumers.

When it comes to entertainment and sports, streaming in VR has become the new 4K HEVC/UHD of 2016. This has been accelerated by the release of new camera capture hardware like GoPro and streaming capabilities such as 360° video streaming from Facebook and YouTube. Yahoo streams lots of engaging sports, finance, news, and entertainment video content to tens of millions of users. The opportunity to produce and stream such content in 360° VR opens a unique opportunity to Yahoo to offer new types of engagement, and bring the users a sense of depth and visceral presence.

While this is not an experience that is live in product, it is an area we are actively exploring. In this blog post, we take a look at what’s involved in building an end-to-end VR streaming workflow for both Live and Video on Demand (VOD). Our experiments and research go from camera rig setup, to video stitching, to encoding, to the eventual rendering of videos on video players on desktop and VR headsets. We also discuss challenges yet to be solved and the opportunities they present in streaming VR.

1. The Workflow

Yahoo’s video platform has a workflow that is used internally to enable streaming to an audience of tens of millions with the click of a few buttons. During experimentation, we enhanced this same proven platform and set of APIs to build a complete 360°/VR experience. The diagram below shows the end-to-end workflow for streaming 360°/VR that we built on Yahoo’s video platform.

Figure 1: VR Streaming Workflow at Yahoo

1.1. Capturing 360° video

In order to capture a virtual reality video, you need access to a 360°-capable video camera. Such a camera uses either fish-eye lenses or has an array of wide-angle lenses to collectively cover a 360 (θ) by 180 (ϕ) sphere as shown below.

Though it sounds simple, capturing a scene in 3D 360° is a real challenge, as most 360° video cameras offer only 2D 360° capture.

In initial experiments, we tried capturing 3D video by arranging pairs of cameras side by side, one for each eye, in a spherical configuration. However, this required too many cameras; instead, we use view interpolation in the stitching step to create virtual cameras.

Another important consideration with 360° video is the number of axes the camera is capturing video with. In traditional 360° video that is captured using only a single axis (what we refer to as horizontal video), a user can turn their head from left to right. But this camera setup does not support a user tilting their head at 90°.

To achieve true 3D in our setup, we went with 6–12 GoPro cameras with a 120° field of view (FOV) arranged in a ring, plus one additional camera each on top and bottom, each outputting 2.7K at 30 FPS.

1.2. Stitching 360° video

Projection Layouts

Because a 360° view is a spherical video, the surface of this sphere needs to be projected onto a planar surface in 2D so that video encoders can process it. There are two popular layouts:

Equirectangular layout: This is the most widely-used format in computer graphics to represent spherical surfaces in a rectangular form with an aspect ratio of 2:1. This format has redundant information at the poles which means some pixels are over-represented, introducing distortions at the poles compared to the equator (as can be seen in the equirectangular mapping of the sphere below).

Figure 2: Equirectangular Layout [2]

CubeMap layout: CubeMap layout is a format that has also been used in computer graphics. It contains six individual 2D textures that map to six sides of a cube. The figure below is a typical cubemap representation. In a cubemap layout, the sphere is projected onto six faces and the images are folded out into a 2D image, so pieces of a video frame map to different parts of a cube, which leads to extremely efficient compact packing. Cubemap layouts require about 25% fewer pixels compared to equirectangular layouts.

Figure 3: CubeMap Layout [3]
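
As a rough check on that 25% figure (nominal pixel counts only, before any encoding): an equirectangular frame of width w contains w × (w/2) = 0.5 w² pixels, while a cubemap whose equator spans four faces has faces of side w/4, giving 6 × (w/4)² = 0.375 w² pixels, exactly 25% fewer.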

Stitching Videos

In our setup, we experimented with a couple of stitching packages. One was Vahana VR [4], and the other was a modified version of the open-source Surround360 technology that works with a GoPro rig [5]. Both packages output equirectangular panoramas for the left and the right eye. Here are the steps involved in stitching together a 360° image:

Raw frame image processing: Converts uncompressed raw video data to RGB. This involves several steps, starting with black-level adjustment, then applying demosaicing algorithms to determine the RGB color components of each pixel from its surrounding pixels. It also involves gamma correction, color correction, and anti-vignetting (undoing the reduction in brightness at the image periphery). Finally, this stage applies sharpening and noise-reduction algorithms to enhance the image and suppress noise.

Calibration: During the calibration step, stitching software takes steps to avoid vertical parallax while stitching overlapping portions in adjacent cameras in the rig. The purpose is to align everything in the scene, so that both eyes see every point at the same vertical coordinate. This step essentially matches the key points in images among adjacent camera pairs. It uses computer vision algorithms for feature detection like Binary Robust Invariant Scalable Keypoints (BRISK) [6] and AKAZE [7].

Optical Flow: During stitching, optical flow is used to create virtual cameras that cover the gaps between adjacent real cameras and provide interpolated views. The optical flow algorithm finds the pattern of apparent motion of image objects between two consecutive frames, caused by movement of the object or the camera. It uses OpenCV algorithms to find the optical flow [8].

Below are the frames produced by the GoPro camera rig:

Figure 4: Individual frames from 12-camera rig

Figure 5: Stitched frame output with PTGui

Figure 6: Stitched frame with barrel distortion using Surround360

Figure 7: Stitched frame after removing barrel distortion using Surround360

To get the full depth in stereo, the rig is set up so that i = r * sin(FOV/2 – 360/n), where:

  • i = IPD/2, where IPD is the inter-pupillary distance between the eyes.
  • r = Radius of the rig.
  • FOV = Field of view of GoPro cameras, 120 degrees.
  • n = Number of cameras which is 12 in our setup.

Given that IPD is normally 6.4 cm, i should be greater than 3.2 cm. With a 12-camera setup, the radius of the rig comes to 14 cm. Usually, the more cameras there are, the easier it is to avoid black stripes.
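
As a sanity check, plugging the stated values into the formula above gives i = 14 × sin(120/2 − 360/12) = 14 × sin(30°) = 7 cm, comfortably above the 3.2 cm minimum.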

Reducing Bandwidth – FOV-based adaptive transcoding

For a truly immersive experience, users expect 4K (3840 x 2160) resolution at 60 frames per second (FPS) or higher. Given that typical HMDs have a FOV of 120 degrees, a full 360° video needs a resolution of at least 12K (11520 x 6480). 4K streaming needs 25 Mbps of bandwidth [9], so 12K effectively translates to more than 75 Mbps, and even more at higher frame rates. However, the average Wi-Fi connection in the US provides only about 15 Mbps [10].

One way to address the bandwidth issue is by reducing the resolution of areas that are out of the field of view. Spatial sub-sampling is used during transcoding to produce multiple viewport-specific streams. Each viewport-specific stream has high resolution in a given viewport and low resolution in the rest of the sphere.

On the player side, we can modify traditional adaptive streaming logic to take the field of view into account. Depending on the video, a user who moves their head around a lot could trigger multiple buffer fetches and cause rebuffering. This approach works best in videos where the motion happens in one field of view at a time rather than spanning multiple fields of view at once. This work is still in an experimental stage.
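
To make the idea concrete, below is a minimal sketch of FOV-aware stream selection. The stream list, the yaw source, and the surrounding player integration are hypothetical illustrations, not Yahoo’s actual player code:

// Hypothetical viewport-specific stream table; each stream is
// high-resolution in one region of the sphere and low-resolution elsewhere.
var VIEWPORT_STREAMS = [
  { centerYaw: 0,   url: 'front.m3u8' },
  { centerYaw: 90,  url: 'right.m3u8' },
  { centerYaw: 180, url: 'back.m3u8'  },
  { centerYaw: 270, url: 'left.m3u8'  }
];

// Pick the stream whose high-resolution viewport is closest to where
// the user is currently looking.
function pickStream(headYawDegrees) {
  var yaw = ((headYawDegrees % 360) + 360) % 360; // normalize to [0, 360)
  var best = VIEWPORT_STREAMS[0];
  var bestDelta = 360;
  VIEWPORT_STREAMS.forEach(function (s) {
    var delta = Math.abs(yaw - s.centerYaw);
    delta = Math.min(delta, 360 - delta); // wrap-around angular distance
    if (delta < bestDelta) { bestDelta = delta; best = s; }
  });
  return best;
}

A real player would additionally debounce stream switches and keep a low-resolution full-sphere layer buffered to hide switching latency.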

Both Surround360 and Vahana VR output equirectangular format by default. To reduce the size further, we pass it through a cubemap transform filter integrated into ffmpeg, gaining an additional pixel reduction of ~25% [11] [12].

At the end of the above steps, the stitching pipeline produces high-resolution stereo 3D panoramas, which are then ingested into the existing Yahoo video transcoding pipeline to produce multi-bitrate HLS streams.

1.3. Adding a stitching step to the encoding pipeline

Live – In order to prepare for multi-bitrate streaming over the Internet, a live 360° video-stitched stream in RTMP is ingested into Yahoo’s video platform. A live Elemental encoder was used to re-encode and package the live input into multiple bitrates for adaptive streaming on any device (iOS, Android, browser, Windows, Mac, etc.).

Video on Demand – The existing Yahoo video transcoding pipeline was used to package multi-bitrate HLS streams from raw equirectangular MP4 source videos.

1.4. Rendering 360° video into the player

The spherical video stream is delivered to the Yahoo player in multiple bitrates. As a user changes their viewing angle, different portions of the frame are shown, presenting a 360° immersive experience. There are two types of VR players currently supported at Yahoo:

WebVR-based JavaScript player – The web community has been very active in enabling VR experiences natively, without plugins, from within browsers. The W3C has a JavaScript proposal [13] that describes support for accessing virtual reality (VR) devices, including sensors and head-mounted displays, on the Web. VRDisplay is the main starting point for all the device APIs supported. Some of the key interfaces and attributes exposed are listed below (a brief usage sketch follows the list):

  • VRDisplayCapabilities: Attributes indicating position support, orientation support, and whether the headset has an external display.
  • VRLayer: Contains the HTML5 canvas element that the VRDisplay presents when its submitFrame method is called. It also contains attributes defining the left and right bounds of the textures within the source canvas for presenting to each eye.
  • VREyeParameters: The information required to correctly render a scene for a given eye: the offset (the distance from the midpoint of the user’s eyes to the center of that eye, i.e., half the interpupillary distance, or IPD), the eye’s current FOV, and the recommended renderWidth and renderHeight for each eye’s viewport.
  • getVRDisplays: Returns the list of VRDisplay devices (HMDs) accessible to the browser.
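
As a small illustration of these interfaces, here is a hedged sketch against the WebVR 1.x API as specified (not our player code):

// Enumerate connected VR displays and read the recommended per-eye
// render target size for the first one found.
navigator.getVRDisplays().then(function (displays) {
  if (displays.length === 0) return; // no HMD attached
  var display = displays[0];
  var leftEye = display.getEyeParameters('left');
  console.log('per-eye render size: ' +
              leftEye.renderWidth + 'x' + leftEye.renderHeight);
});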

We implemented a subset of the WebVR spec in the Yahoo player (not in production yet) that lets you watch monoscopic and stereoscopic 3D video on supported web browsers (Chrome, Firefox, Samsung), including Oculus Gear VR-enabled phones. The Yahoo player takes the equirectangular video and maps its individual frames onto an HTML5 canvas element. It uses WebGL and the Three.js library to compute the orientation and extract the corresponding frames to display.

For web devices that support only monoscopic rendering, like desktop browsers without an HMD, it creates a single PerspectiveCamera object specifying the FOV and aspect ratio. It renders new frames as the device’s requestAnimationFrame callback fires. As part of rendering a frame, it first calculates the projection matrix for the FOV and sets the X (user’s right), Y (up), and Z (behind the user) coordinates of the camera position.

For devices that support stereoscopic rendering, like Samsung phones with Gear VR, the WebVR player creates two PerspectiveCamera objects, one for the left eye and one for the right eye. Each camera queries the VR device capabilities for the eye parameters (FOV, renderWidth, and renderHeight) every time a frame needs to be rendered, at the native refresh rate of the HMD. The key difference between stereoscopic and monoscopic is the perceived sense of depth: video frames separated by an offset are rendered by separate canvas elements to each individual eye.
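
A minimal sketch of that stereo camera setup using the public Three.js API follows; the IPD value and function name are illustrative, not the Yahoo player’s internals:

// Half of a typical 6.4 cm interpupillary distance, in meters.
var HALF_IPD = 0.032;

// Create one PerspectiveCamera per eye, offset horizontally from the
// head position; each camera renders to its own half of the output.
function makeEyeCameras(fov, aspect) {
  var left  = new THREE.PerspectiveCamera(fov, aspect, 0.1, 1000);
  var right = new THREE.PerspectiveCamera(fov, aspect, 0.1, 1000);
  left.position.x  = -HALF_IPD;
  right.position.x =  HALF_IPD;
  return { left: left, right: right };
}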

Cardboard VR – Google provides a VR SDK for both iOS and Android [14]. This simplifies common VR tasks like lens distortion correction, spatial audio, head tracking, and stereoscopic side-by-side rendering. For iOS, we integrated Cardboard VR functionality into our Yahoo Video SDK, so that users can watch stereoscopic 3D videos on iOS using Google Cardboard.

2. Results

With all the pieces in place and experimentation done, we successfully live-streamed an internal company-wide event in 360°.

Figure 8: 360° Live streaming of Yahoo internal event

In addition to demonstrating our live streaming capabilities, we are also experimenting with showing 360° VOD videos produced with a GoPro-based camera rig. Here is a screenshot of one of the 360° videos being played in the Yahoo player.

Figure 9: Yahoo Studios produced 360° VOD content in the Yahoo Player

3. Challenges and Opportunities

3.1. Enormous amounts of data

As we alluded to in the video processing section of this post, delivering 4K resolution per eye per FOV at a high frame rate remains a challenge. While FOV-adaptive streaming reduces the size by providing high-resolution streams separately for each FOV, delivering an impeccable 60+ FPS viewing experience still requires far more data than current internet pipes can handle. Some other possible options we are paying close attention to:

Compression efficiency with HEVC and VP9 – New codecs like HEVC and VP9 have the potential to provide significant compression gains. HEVC open-source codecs like x265 have shown a 40% compression performance gain compared to the currently ubiquitous H.264/AVC codec. Likewise, Google’s VP9 codec has shown similar 40% gains. The key challenges are hardware decoding support and browser support. But with Apple and Microsoft firmly behind HEVC, and Firefox and Chrome already supporting VP9, we believe most browsers will support HEVC or VP9 within a year.

Using 10-bit color depth vs. 8-bit color depth – Traditional monitors support 8 bpc (bits per channel) for displaying images. Given that each pixel has 3 channels (RGB), 8 bpc maps to 256x256x256 color/luminosity combinations, or about 16.8 million colors. With 10-bit color depth, you can represent even more colors. But the biggest stated advantage of 10-bit color depth is better compression during encoding, even if the source only uses 8 bits per channel. Both the x264 and x265 codecs support 10-bit color depth, and ffmpeg already supports encoding at 10-bit color depth.
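
For scale: 10 bpc gives 2^10 = 1024 levels per channel, or 1024x1024x1024 ≈ 1.07 billion combinations, 64 times as many as 8 bpc.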

3.2. Six degrees of freedom

With current camera rig workflows, users viewing the streams through an HMD get three degrees of freedom (DoF): the rotational movements of looking up/down, turning left/right, and tilting the head. But you still can’t get a different perspective by moving within the scene, i.e., moving forward/backward. Until now, this true six-DoF immersive VR experience has only been possible in CG VR games. In video streaming, LightField technology-based video cameras produced by Lytro are the first to capture light field volume data from all directions [15]. But lightfield-based videos require an order of magnitude more data than traditional fixed-FOV, fixed-IPD, fixed-lens camera rigs like GoPro. As bandwidth problems get resolved via better compression and better networks, achieving true immersion should be possible.

4. Conclusion

VR streaming is an emerging medium, and with the addition of 360° VR playback capability, Yahoo’s video platform gives us a great starting point to explore the opportunities virtual reality brings to video. As we continue to work to delight our users with immersive video content, we remain focused on optimizing the rendering of high-quality 4K content in our players. We’re looking at building FOV-based adaptive streaming capabilities and better compression during delivery. These capabilities, and the enhancement of our WebVR player to play on more HMDs like the HTC Vive and Oculus Rift, will set us on track to offer streaming capabilities across the entire spectrum. At the same time, we are keeping a close watch on advancements in spatial audio experiences, as well as in streaming volumetric lightfield videos to achieve true six degrees of freedom, with the aim of realizing the full potential of VR.

Glossary – VR concepts:

VR – Virtual reality, commonly referred to as VR, is an immersive computer-simulated reality experience that places viewers inside an experience. It “transports” viewers from their physical reality into a closed virtual reality. VR usually requires a headset device that takes care of sights and sounds, while the most-involved experiences can include external motion tracking, and sensory inputs like touch and smell. For example, when you put on VR headgear you suddenly start feeling immersed in the sounds and sights of another universe, like the deck of the Star Trek Enterprise. Though you remain physically at your place, VR technology is designed to manipulate your senses in a manner that makes you truly feel as if you are on that ship, moving through the virtual environment and interacting with the crew.

360 degree video – A 360° video is created with a camera system that simultaneously records all 360 degrees of a scene. It is a flat equirectangular video projection that is morphed into a sphere for playback on a VR headset. A standard world map is an example of equirectangular projection, which maps the surface of the world (sphere) onto orthogonal coordinates.
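
For reference, the projection is linear in both angles: a point at longitude λ ∈ [−180°, 180°] and latitude φ ∈ [−90°, 90°] lands at x = (λ + 180)/360 × w and y = (90 − φ)/180 × h on a w × h equirectangular frame, which is exactly what over-represents the poles, as noted in the stitching section above.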

Spatial Audio – Spatial audio gives the creator the ability to place sound around the user. Unlike traditional mono/stereo/surround audio, it responds to head rotation in sync with video. While listening to spatial audio content, the user receives a real-time binaural rendering of an audio stream [17].

FOV – A human can naturally see roughly 170 degrees of viewable area (field of view). Most consumer-grade head-mounted displays (HMDs), like the Oculus Rift and HTC Vive, currently display 90 to 120 degrees.

Monoscopic video – A monoscopic video means that both eyes see a single flat image, or video file. A common camera setup involves six cameras filming six different fields of view. Stitching software is used to form a single equirectangular video. The maximum output resolution for 2D monoscopic videos on Gear VR is 3840×1920 at 30 frames per second.

Presence – Presence is a kind of immersion where the low-level systems of the brain are tricked to such an extent that they react just as they would to non-virtual stimuli.

Latency – The time between when you move your head and when you see the update on the screen. Acceptable latency ranges from 11 ms (for games) to 20 ms (for watching 360° VR videos).

Head Tracking – There are two forms:

  • Positional tracking – movements and related translations of your body, e.g., swaying side to side.
  • Traditional head tracking – left, right, up, down, and roll, like a clock’s rotation.

References:

[1] Ultimate Display Speech as reminisced by Fred Brooks: http://www.roadtovr.com/fred-brooks-ivan-sutherlands-1965-ultimate-display-speech/

[2] Equirectangular Layout Image: https://www.flickr.com/photos/54144402@N03/10111691364/

[3] CubeMap Layout: http://learnopengl.com/img/advanced/cubemaps_skybox.png

[4] Vahana VR: http://www.video-stitch.com/

[5] Surround360 Stitching software: https://github.com/facebook/Surround360

[6] Computer Vision Algorithm BRISK: https://www.robots.ox.ac.uk/~vgg/rg/papers/brisk.pdf

[7] Computer Vision Algorithm AKAZE: http://docs.opencv.org/3.0-beta/doc/tutorials/features2d/akaze_matching/akaze_matching.html

[8] Optical Flow: http://docs.opencv.org/trunk/d7/d8b/tutorial_py_lucas_kanade.html

[9] 4K connection speeds: https://help.netflix.com/en/node/306

[10] Average connection speeds in US: https://www.akamai.com/us/en/about/news/press/2016-press/akamai-releases-fourth-quarter-2015-state-of-the-internet-report.jsp

[11] CubeMap transform filter for ffmpeg: https://github.com/facebook/transform

[12] FFMPEG software: https://ffmpeg.org/

[13] WebVR Spec: https://w3c.github.io/webvr/

[14] Google Daydream SDK: https://vr.google.com/cardboard/developers/

[15] Lytro LightField Volume for six DoF: https://www.lytro.com/press/releases/lytro-immerge-the-worlds-first-professional-light-field-solution-for-cinematic-vr

[16] 10 bit color depth: https://gist.github.com/l4n9th4n9/4459997

How to Help Achieve Mobile App Transport Security (ATS) Compliance by Using Amazon CloudFront and AWS Certificate Manager

Post Syndicated from Lee Atkinson original https://aws.amazon.com/blogs/security/how-to-help-achieve-mobile-app-transport-security-compliance-by-using-amazon-cloudfront-and-aws-certificate-manager/

Web and application users and organizations have expressed a growing desire to conduct most of their HTTP communication securely by using HTTPS. At its 2016 Worldwide Developers Conference, Apple announced that starting in January 2017, apps submitted to its App Store will be required to support App Transport Security (ATS). ATS requires all connections to web services to use HTTPS and TLS version 1.2. In addition, Google has announced that starting in January 2017, new versions of its Chrome web browser will mark HTTP websites as being “not secure.”

In this post, I show how you can generate Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificates by using AWS Certificate Manager (ACM), apply the certificates to your Amazon CloudFront distributions, and deliver your websites and APIs over HTTPS.

Background

Hypertext Transfer Protocol (HTTP) was proposed originally without the need for security measures such as server authentication and transport encryption. As HTTP evolved from covering simple document retrieval to sophisticated web applications and APIs, security concerns emerged. For example, if someone were able to spoof a website’s DNS name (perhaps by altering the DNS resolver’s configuration), they could direct users to another web server. Users would be unaware of this because the URL displayed by the browser would appear just as the user expected. If someone were able to gain access to network traffic between a client and server, that individual could eavesdrop on HTTP communication and either read or modify the content, without the client or server being aware of such malicious activities.

Hypertext Transfer Protocol Secure (HTTPS) was introduced as a secure version of HTTP. It uses either the SSL or TLS protocol to create a secure channel through which HTTP communication can be transported. Using SSL/TLS, servers can be authenticated by using digital certificates, which can be digitally signed by one of the certificate authorities (CAs) trusted by the web client. Certificates mitigate website spoofing, and they can later be revoked by the CA, providing additional security. Revoked certificates are published by the authority on a certificate revocation list, or their status is made available via an Online Certificate Status Protocol (OCSP) responder. The SSL/TLS “handshake” that initiates the secure channel exchanges the encryption keys used to encrypt the data sent over it.

To avoid warnings from client applications regarding untrusted certificates, a CA that is trusted by the application must sign the certificates. The process of obtaining a certificate from a CA begins with generating a key pair and a certificate signing request. The certificate authority uses various methods in order to verify that the certificate requester is the owner of the domain for which the certificate is requested. Many authorities charge for verification and generation of the certificate.

Use ACM and CloudFront to deliver HTTPS websites and APIs

The process of requesting and paying for certificates, storing and transporting them securely, and repeating the process at renewal time can be a burden for website owners. ACM enables you to easily provision, manage, and deploy SSL/TLS certificates for use with AWS services, including CloudFront. ACM removes the time-consuming manual process of purchasing, uploading, and renewing certificates. With ACM, you can quickly request a certificate, deploy it on your CloudFront distributions, and let ACM handle certificate renewals. In addition to requesting SSL/TLS certificates provided by ACM, you can import certificates that you obtained outside of AWS.

CloudFront is a global content delivery network (CDN) service that accelerates the delivery of your websites, APIs, video content, and other web assets. CloudFront’s proportion of traffic delivered via HTTPS continues to increase as more customers use the secure protocol to deliver their websites and APIs.

CloudFront supports Apple’s ATS requirements for TLS 1.2, Perfect Forward Secrecy, server certificates with 2048-bit Rivest-Shamir-Adleman (RSA) keys, and a choice of ciphers. See more details in Supported Protocols and Ciphers.

The following diagram illustrates an architecture with ACM, a CloudFront distribution and its origins, and how they integrate to provide HTTPS access to end users and applications.

Solution architecture diagram

  1. ACM automates the creation and renewal of SSL/TLS certificates and deploys them to AWS resources such as CloudFront distributions and Elastic Load Balancing load balancers at your instruction.
  2. Users communicate with CloudFront over HTTPS. CloudFront terminates the SSL/TLS connection at the edge location.
  3. You can configure CloudFront to communicate to the origin over HTTP or HTTPS.

CloudFront enables easy HTTPS adoption. It provides a default *.cloudfront.net wildcard certificate and supports custom certificates, which can be either created by a third-party CA, or created and managed by ACM. ACM automates the process of generating and associating certificates with your CloudFront distribution for the first time and on each renewal. CloudFront supports the Server Name Indication (SNI) TLS extension (enabling efficient use of IP addresses when hosting multiple HTTPS websites) and dedicated-IP SSL/TLS (for older browsers and legacy clients that do not support SNI).

Keeping that background information in mind, I will now show you how you can generate a certificate with ACM and associate it with your CloudFront distribution.

Generate a certificate with ACM and associate it with your CloudFront distribution

In order to help deliver websites and APIs that are compliant with Apple’s ATS requirements, you can generate a certificate in ACM and associate it with your CloudFront distribution.

To generate a certificate with ACM and associate it with your CloudFront distribution:

  1. Go to the ACM console and click Get started.
    ACM "Get started" page
  2. On the next page, type the website’s domain name for your certificate. If applicable, you can enter multiple domains here so that the same certificate can be used for multiple websites. In my case, I type *.leeatk.com to create what is known as a wildcard certificate that can be used for any domain ending in .leeatk.com (that is a domain I own). Click Review and request.
    Request a certificate page
  3. Click Confirm and request. You must now validate that you own the domain. ACM sends an email with a verification link to the domain registrant, technical contact, and administrative contact registered in the Whois record for the domain. ACM also sends the verification link to email addresses commonly associated with an administrator of a domain: administrator, hostmaster, postmaster, and webmaster. ACM sends the same verification email to all these addresses in the expectation that at least one address is monitored by the domain owner. The link in any of the emails can be used to verify the domain.
    List of email addresses to which the email with verification link will be sent
  4. Until the certificate has been validated, the status of the certificate remains Pending validation. When I went through this approval process for *.leeatk.com, I received the verification email shown in the following screenshot. When you receive the verification email, click the link in the email to approve the request.
    Example verification email
  5. After you click I Approve on the landing page, you will then see a page that confirms that you have approved an SSL/TLS certificate for your domain name.
    SSL/TLS certificate confirmation page
  6. Return to the ACM console, and the certificate’s status should become Issued. You may need to refresh the webpage.
    ACM console showing the certificate has been issued
  7. Now that you have created your certificate, go to the CloudFront console and select the distribution with which you want to associate the certificate.
    Screenshot of associating the CloudFront distribution with which to associate the certificate
  8. Click Edit. Scroll down to SSL Certificate and select Custom SSL certificate. From the drop-down list, select the certificate provided by ACM. Select Only Clients that Support Server Name Indication (SNI). You could select All Clients if you want to support older clients that do not support SNI.
    Screenshot of choosing a custom SSL certificate
  9. Save the configuration by clicking Yes, Edit at the bottom of the page.
  10. Now, when you view the website in a browser (Firefox is shown in the following screenshot), you see a green padlock in the address bar, confirming that this page is secured with a certificate trusted by the browser.
    Screenshot showing green padlock in address bar

Configure CloudFront to redirect HTTP requests to HTTPS

We encourage you to use HTTPS to help make your websites and APIs more secure. Therefore, we recommend that you configure CloudFront to redirect HTTP requests to HTTPS.

To configure CloudFront to redirect HTTP requests to HTTPS:

  1. Go to the CloudFront console, select the distribution again, and then click Cache Behavior.
    Screenshot showing Cache Behavior button
  2. In my case, I only have one behavior in my distribution. (If I had more behaviors, I would repeat the process for each behavior for which I wanted HTTP-to-HTTPS redirection.) Click Edit.
  3. Next to Viewer Protocol Policy, choose Redirect HTTP to HTTPS, and click Yes, Edit at the bottom of the page.
    Screenshot of choosing Redirect HTTP to HTTPS

I could also consider employing an HTTP Strict Transport Security (HSTS) policy on my website. In this case, I would add a Strict-Transport-Security response header at my origin to instruct browsers and other applications to make only HTTPS requests to my website for a period of time specified in the header’s value. This ensures that if a user submits a URL to my website specifying only HTTP, the browser will make an HTTPS request anyway. This is also useful for websites that link to my website using HTTP URLs.
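
For example, a minimal HSTS response header might look like the following (the one-year max-age is an illustrative value, not a specific recommendation):

Strict-Transport-Security: max-age=31536000; includeSubDomains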

Summary

CloudFront and ACM enable more secure communication between your users and your websites. CloudFront allows you to adopt HTTPS for your websites and APIs. ACM provides a simple way to request, manage, and renew your SSL/TLS certificates, and deploy those to AWS services such as CloudFront. Mobile application developers and API providers can more easily meet Apple’s ATS requirements now using CloudFront, in time for the January 2017 deadline.

If you have comments about this post, submit them in the “Comments” section below. If you have implementation questions, please start a new thread on the CloudFront forum.

– Lee

Web2Web: Serverless Websites Powered by Torrents & Bitcoin

Post Syndicated from Ernesto original https://torrentfreak.com/web2web-serverless-websites-powered-by-torrents-bitcoin-161008/

While most people still associate torrents with desktop clients, the browser-based WebTorrent equivalent is quickly gaining popularity.

Simply put, WebTorrent is a torrent client for the web. Instead of using standalone applications it allows people to share files directly from their browser, without having to configure or install anything.

This allows people to stream videos directly from regular browsers such as Chrome and Firefox, similar to what they would do on YouTube.

The technology, created by Stanford University graduate Feross Aboukhadijeh, already piqued the interest of Netflix and also resulted in various innovative implementations.

Most recently, Czech developer Michal Spicka created the Web2Web project, which allows people to share entire websites using WebTorrent technology. This makes these sites virtually impossible to take down.

Michal tells TorrentFreak that he is fascinated by modern technology and wanted to develop a resilient, serverless and anonymous platform for people to share something online.

“In the past we’ve seen powerful interest groups shut down legitimate websites. I wondered if I could come up with something that can’t be taken down that easily and also protects the site operator’s identity,” Michal says.

For most websites the servers and domain names are the most vulnerable aspects. Both can be easily seized and are far from anonymous. With Web2Web, however, people can run a website without any of the above.

“To run a Web2Web website neither the server nor the domain is required. All you need is a bootstrap page that loads your website from the torrent network and displays it in the browser,” Michal tells us.

While there are similar alternatives available, such as ZeroNet, the beauty of Web2Web is that it works in any modern browser. This means that there’s no need to install separate software.

The bootstrap page that serves all content is a simple HTML file that can be mirrored anywhere online or downloaded to a local computer. With help from Bitcoin the ‘operator’ can update the file, after which people will see the new version.

“If the website operator wants to publish new content on his previously created website, he creates a torrent of the new content first and then inserts the torrent infohash into a bitcoin transaction sent from his bitcoin address,” Michal says.

“The website is constantly watching that address for new transactions, extracts the infohash, downloads the new content from the torrent swarm, and updates itself accordingly,” he adds.
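
Conceptually, the update loop looks something like the sketch below. The watchAddress and extractInfohash helpers are hypothetical stand-ins (Web2Web’s actual code differs), while client.add and getBuffer are the real WebTorrent browser API:

// Assumes the WebTorrent browser bundle is already loaded on the page.
var client = new WebTorrent();

// Hypothetical helper: invokes the callback for each new transaction
// observed on the operator's bitcoin address.
watchAddress(OPERATOR_BITCOIN_ADDRESS, function (tx) {
  var infohash = extractInfohash(tx); // hypothetical: e.g. from an OP_RETURN output
  client.add('magnet:?xt=urn:btih:' + infohash, function (torrent) {
    torrent.files[0].getBuffer(function (err, buf) {
      if (err) return;
      document.documentElement.innerHTML = buf.toString(); // render the new site
    });
  });
});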

For Michal the project is mostly just an interesting experiment. The main goal was to show that it’s possible to make working websites without any central server involved, using WebTorrent and bitcoin.

He has no clear vision of how people will use it, but stresses that he’s not promoting or encouraging illegal uses in any way.

“I’m strongly against using it for anything illegal. On the other hand, I can’t prevent people from doing that. The moment will come when this project gets abused and only then we will see if it’s really that resilient,” he notes.

In the meantime, this perfectly legal demo gives people an idea of what’s possible. More info on how to create distributed pages is available here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Succeeding MegaZeux

Post Syndicated from Eevee original https://eev.ee/blog/2016/10/06/succeeding-megazeux/

In the beginning, there was ZZT. ZZT was a set of little shareware games for DOS that used VGA text mode for all the graphics, leading to such whimsical Rogue-like choices as ä for ammo pickups, Ω for lions, and ♀ for keys. It also came with an editor, including a small programming language for creating totally custom objects, which gave it the status of “game creation system” and a legacy that survives even today.

A little later on, there was MegaZeux. MegaZeux was something of a spiritual successor to ZZT, created by (as I understand it) someone well-known for her creative abuse of ZZT’s limitations. It added quite a few bells and whistles, most significantly a built-in font editor, which let aspiring developers draw simple sprites rather than rely on whatever they could scrounge from the DOS font.

And then…

And then, nothing. MegaZeux was updated for quite a while, and (unlike ZZT) has even been ported to SDL so it can actually run on modern operating systems. But there was never a third entry in this series, another engine worthy of calling these its predecessors.

I think that’s a shame.

The legacy

Plenty of people have never heard of ZZT, and far more have never heard of MegaZeux, so here’s a brief primer.

Both were released as “first-episode” shareware: they came with one game free, and you could pony up some cash to get the sequels. Those first games — Town of ZZT and Caverns of Zeux — have these moderately iconic opening scenes.

Town of ZZT
Caverns of Zeux

In the intervening decades, all of the sequels have been released online for free. If you want to try them yourself, ZZT 3.2 includes Town of ZZT and its sequels (but must be run in DOSBox), and you can get MegaZeux 2.84c, Caverns of Zeux, and the rest of the Zeux series separately.

Town of ZZT has you, the anonymous player, wandering around a loosely-themed “town” in search of five purple keys. It’s very much a game of its time: the setting is very vague but manages to stay distinct and memorable with very light touches; the puzzles range from trivial to downright cruel; the interface itself fights against you, as you can’t carry more than one purple key at a time; and the game can be softlocked in numerous ways, only some of which have advance warning in the form of “SAVE!!!” carved directly into the environment.

The armory, and a gruff guardian
Darkness, which all players love
A few subtle hints

Caverns of Zeux is a little more cohesive, with a (thin) plot that unfolds as you progress through the game. Your objectives are slightly vaguer; you start out only knowing you’re trapped in a cave, and further information must be gleaned from NPCs. The gameplay is shaken up a couple times throughout — you discover spellbooks that give you new abilities, but later lose your primary weapon. The meat of the game is more about exploring and less about wacky Sokoban puzzles, though with many of the areas looking very similar and at least eight different-colored doors scattered throughout the game, the backtracking can get frustrating.

A charming little town
A chasm with several gem holders
The ice caves, or maybe caverns

Those are obviously a bit retro-looking now, but they’re not bad for VGA text made by individual hobbyists in 1991 and 1994. ZZT only even uses CGA’s eight bright colors. MegaZeux takes a bit more advantage of VGA capabilities to let you edit the palette as well as the font, but games are still restricted to only using 16 colors at one time.

The font ZZT was stuck with
MegaZeux's default character set

That’s great, but who cares?

A fair question!

ZZT and MegaZeux both occupy a unique game development niche. It’s the same niche as (Z)Doom, I think, and a niche that very few other tools fill.

I’ve mumbled about this on Twitter a couple times, and several people have suggested that the PICO-8 or Mario Maker might be in the same vein. I disagree wholeheartedly! ZZT, MegaZeux, and ZDoom all have two critical — and rare — things in common.

  1. You can crack open the editor, draw a box, and have a game. On the PICO-8, you are a lonely god in an empty void; you must invent physics from scratch before you can do anything else. ZZT, MegaZeux, and Doom all have enough built-in gameplay to make a variety of interesting levels right out of the gate. You can treat them as nothing more than level editors, and you’ll be hitting the ground running — no code required. And unlike most “no programming” GCSes, I mean that literally!

  2. If and when you get tired of only using the built-in objects, you can extend the engine. ZZT and MegaZeux have programmable actors built right in. Even vanilla Doom was popular enough to gain a third-party tool, DEHACKED, which could edit the compiled doom.exe to customize actor behavior. Mario Maker might be a nice and accessible environment for making games, but at the end of the day, the only thing you can make with it is Mario.

Both of these properties together make for a very smooth learning curve. You can open the editor and immediately make something, rather than needing to absorb a massive pile of upfront stuff before you can even get a sprite on the screen. Once you need to make small tweaks, you can dip your toes into robots — a custom pickup that gives you two keys at once is four lines of fairly self-explanatory code. Want an NPC with a dialogue tree? That’s a little more complex, but not much. And then suddenly you discover you’re doing programming. At the same time, you get rendering, movement, combat, collision, health, death, pickups, map transitions, menus, dialogs, saving/loading… all for free.

MegaZeux has one more nice property, the art learning curve. The built-in font is perfectly usable, but a world built from monochrome 8×14 tiles is a very comfortable place to dabble in sprite editing. You can add eyebrows to the built-in player character or slightly reshape keys to fit your own tastes, and the result will still fit the “art style” of the built-in assets. Want to try making your own sprites from scratch? Go ahead! It’s much easier to make something that looks nice when you don’t have to worry about color or line weight or proportions or any of that stuff.

It’s true that we’re in an “indie” “boom” right now, and more game-making tools are available than ever before. A determined game developer can already choose from among dozens (hundreds?) of editors and engines and frameworks and toolkits and whatnot. But the operative word there is “determined”. Not everyone has their heart set on this. The vast majority of people aren’t interested in devoting themselves to making games, so the most they’d want to do (at first) is dabble.

But programming is a strange and complex art, where dabbling can be surprisingly difficult. If you want to try out art or writing or music or cooking or dance or whatever, you can usually get started with some very simple tools and a one-word Google search. If you want to try out game development, it usually requires programming, which in turn requires a mountain of upfront context and tool choices and explanations and mysterious incantations and forty-minute YouTube videos of some guy droning on in monotone.

To me, the magic of MegaZeux is that anyone with five minutes to spare can sit down, plop some objects around, and have made a thing.

Deep dive

MegaZeux has a lot of hidden features. It also has a lot of glass walls. Is that a phrase? It should be a phrase. I mean that it’s easy to find yourself wanting to do something that seems common and obvious, yet find out quite abruptly that it’s structurally impossible.

I’m not leading towards a conclusion here, only thinking out loud. I want to explain what makes MegaZeux interesting, but also explain what makes MegaZeux limiting, but also speculate on what might improve on it. So, you know, something for everyone.

Big picture

MegaZeux is a top-down adventure-ish game engine. You can make platformers, if you fake your own gravity; you can make RPGs, if you want to build all the UI that implies.

MegaZeux games can only be played in, well, MegaZeux. Games that need instructions and multiple downloads to be played are fighting an uphill battle. It’s a simple engine that seems reasonable to deploy to the web, and I’ve heard of a couple attempts at either reimplementing the engine in JavaScript or throwing the whole shebang at emscripten, but none are yet viable.

People have somewhat higher expectations from both games and tools nowadays. But approachability is often at odds with flexibility. The more things you explicitly support, the more complicated and intimidating the interface — or the more hidden features you have to scour the manual to even find out about.

I’ve looked through the advertising screenshots of Game Maker and RPG Maker, and I’m amazed how many things are all over the place at any given time. It’s like trying to configure the old Mozilla Suite. Every new feature means a new checkbox somewhere, and eventually half of what new authors need to remember is the set of things they can safely ignore.

SLADE’s Doom map editor manages to be much simpler, but I’m not particularly happy with that, either — it’s not clever enough to save you from your mistakes (or necessarily detect them), and a lot of the jargon makes no sense unless you’ve already learned what it means somewhere else. Plus, making the most of ZDoom’s extra features tends to involve navigating ten different text files that all have different syntax and different rules.

MegaZeux has your world, some menus with objects in them, and spacebar to place something. The UI is still very DOS-era, but once you get past that, it’s pretty easy to build something.

How do you preserve that in something “modern”? I’m not sure. The only remotely-similar thing I can think of is Mario Maker, which cleverly hides a lot of customization options right in the world editor UI: placing wings on existing objects, dropping objects into blocks, feeding mushrooms to enemies to make them bigger. The downside is that Mario Maker has quite a lot of apocryphal knowledge that isn’t written down anywhere. (That’s not entirely a downside… but I could write a whole other post just exploring that one sentence.)

Graphics

Oh, no.

Graphics don’t make the game, but they’re a significant limiting factor for MegaZeux. Fixing everything to a grid means that even a projectile can only move one tile at a time. Only one character can be drawn per grid space, so objects can’t usefully be drawn on top of each other. Animations are difficult, since they eat into your 255-character budget, which limits real-time visual feedback. Most individual objects are a single tile — creating anything larger requires either a lot of manual work to keep all the parts together, or the use of multi-tile sprites which don’t quite exist on the board.

And yet! The same factors are what make MegaZeux very accessible. The tiles are small and simple enough that different art styles don’t really clash. Using a grid means simple games don’t have to think about collision detection at all. A monochromatic font can be palette-shifted, giving you colorful variants of the same objects for free.

How could you scale up the graphics but preserve the charm and approachability? Hmm.

I think the palette restrictions might be important here, but merely bumping from 2 to 8 colors isn’t quite right. The palette-shifting in MegaZeux always makes me think of keys first, and multi-colored keys make me think of Chip’s Challenge, where the key sprites were simple but lightly shaded.

All four Chips Challenge 2 keys

The game has to contain all four sprites separately. If you wanted to have a single sprite and get all of those keys by drawing it in different colors, you’d have to specify three colors per key: the base color, a lighter color, and a darker color. In other words, a ramp — a short gradient, chosen from a palette, that can represent the same color under different lighting. Here are some PICO-8 ramps, for example. What about a sprite system that drew sprites in terms of ramps rather than individual colors?

A pixel-art door in eight different color schemes

I whipped up this crappy example to illustrate. All of the doors are fundamentally the same image, and all of them use only eight colors: black, transparent, and two ramps of three colors each. The top-left door could be expressed as just “light gray” and “blue” — those colors would be expanded into ramps automatically, and black would remain black.

I don’t know how well this would work, but I’d love to see someone try it. It may not even be necessary to require all sprites be expressed this way — maybe you could import your own truecolor art if you wanted. ZDoom works kind of this way, though it’s more of a historical accident: it does support arbitrary PNGs, but vanilla Doom sprites use a custom format that’s in terms of a single global palette, and only that custom format can be subjected to palette manipulation.
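
To make the idea a bit more concrete, here’s a quick sketch of how ramp-based drawing could work; this is my own illustration of the concept, not a feature of MegaZeux or any existing engine. A sprite’s pixels store ramp slots instead of colors, and the concrete ramp is chosen per placement:

// Each ramp is a dark/base/light triple; the hex values are arbitrary.
var RAMPS = {
  blue: ['#113366', '#3366aa', '#6699dd'],
  red:  ['#661111', '#aa3333', '#dd6666']
};

// pixels is a 2D array whose cells are null (transparent), 'black',
// or a slot index 0-2 meaning "this position in whichever ramp is chosen".
function drawSprite(ctx, pixels, rampName, scale) {
  var ramp = RAMPS[rampName];
  pixels.forEach(function (row, y) {
    row.forEach(function (cell, x) {
      if (cell === null) return;
      ctx.fillStyle = (cell === 'black') ? '#000' : ramp[cell];
      ctx.fillRect(x * scale, y * scale, scale, scale);
    });
  });
}

The same pixel data drawn with 'blue' or 'red' would yield two recolored doors like the mockup above, with the shading relationships preserved for free.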


Now, MegaZeux has the problem that small sprites make it difficult to draw bigger things like UI (or a non-microscopic player). The above sprites are 32×32 (scaled up 2× for ease of viewing here), which creates the opposite problem: you can’t possibly draw text or other smaller details with them.

I wonder what could be done here. I know that the original Pokémon games have a concept of “metatiles”: every map is defined in terms of 4×4 blocks of smaller tiles. You can see it pretty clearly on this map of Pallet Town. Each larger square is a metatile, and many of them repeat, even in areas that otherwise seem different.

Pallet Town from Pokémon Red, carved into blocks

I left the NPCs in because they highlight one of the things I found most surprising about this scheme. All the objects you interact with — NPCs, signs, doors, items, cuttable trees, even the player yourself — are 16×16 sprites. The map appears to be made out of 16×16 sprites, as well — but it’s really built from 8×8 tiles arranged into bigger 32×32 tiles.

This isn’t a particularly nice thing to expose directly to authors nowadays, but it demonstrates that there are other ways to compose tiles besides the obvious. Perhaps simple terrain like grass and dirt could be single large tiles, but you could also make a large tile by packing together several smaller tiles?

Text? Oh, text can just be a font.

Player status

MegaZeux has no HUD. To know how much health you have, you need to press Enter to bring up the pause menu, where your health is listed in a stack of other numbers like “gems” and “coins”. I say “menu”, but the pause menu is really a list of keyboard shortcuts, not something you can scroll through and choose items from.

MegaZeux's in-game menu, showing a list of keyboard shortcuts on the left and some stats on the right

To be fair, ZZT does reserve the right side of the screen for your stats, and it puts health at the top. I find myself scanning the MegaZeux pause menu for health every time, which seems a somewhat poor choice for the number that makes the game end when you run out of it.

Unlike most adventure games, your health is an integer starting at 100, not a small number of hearts or whatever. The only feedback when you take damage is a sound effect and an “Ouch!” at the bottom of the screen; you don’t flinch, recoil, or blink. Health pickups might give you any amount of health, you can pick up health beyond 100, and nothing on the screen tells you how much you got when you pick one up. Keeping track of your health in your head is, ah, difficult.

MegaZeux also has a system of multiple lives, but those are also just a number, and the default behavior on “death” is for your health to reset to 100 and absolutely nothing else happens. Walking into lava (which hurts for 100 at a time) will thus kill you and strip you of all your lives quite rapidly.

It is possible to manually create a HUD in MegaZeux using the “overlay” layer, a layer that gets drawn on top of everything else in the world. The downside is that you then can’t use the overlay for anything in-world, like roofs or buildings that can be walked behind. The overlay can be in multiple modes, one that’s attached to the viewport (like a HUD) and one that’s attached to the world (like a ceiling layer), so an obvious first step would be offering these as separate features.

An alternative is to use sprites, blocks of tiles created and drawn as a single unit by Robotic code. Sprites can be attached to the viewport and can even be drawn even above the overlay, though they aren’t exposed in the editor and must be created entirely manually. Promising, if clumsy and a bit non-obvious — I only just now found out about this possibility by glancing at an obscure section of the manual.

Another looming problem is that text is the same size as everything else — but you generally want a HUD to be prominent enough to glance at very quickly.

This makes me wonder how more advanced drawing could work in general. Instead of writing code by hand to populate and redraw your UI, could you just drag and drop some obvious components (like “value of this number”) onto a layer? Reuse the same concept for custom dialogs and menus, perhaps?

Inventory

MegaZeux has no inventory. Or, okay, it has sort of an inventory, but it’s all over the place.

The stuff in the pause menu is kind of like an inventory. It counts ammo, gems, coins, two kinds of bombs, and a variety of keys for you. The game also has multiple built-in objects that can give you specific numbers of gems and coins, which is neat, except that gems and coins don’t actually do anything. I think they increase your score, but until now I’d forgotten that MegaZeux has a score.

A developer can also define six named “counters” (i.e., integers) that will show up on the pause menu when nonzero. Caverns of Zeux uses this to show you how many rainbow gems you’ve discovered… but it’s just a number labeled RainbowGems, and there’s no way to see which ones you have.

Other than that, you’re on your own. All of the original Zeux games made use of an inventory, so this is a really weird oversight. Caverns of Zeux also had spellbooks, but you could only see which ones you’d found by trying to use them and seeing if it failed. Chronos Stasis has maybe a dozen items you can collect and no way to see which ones you have — though, to be fair, you use most of them in the same place. Forest of Ruin has a fairly standard inventory, but no way to view it. All three games have at least one usable item that they just bind to a key, which you’d better remember, because it’s game-specific and thus not listed in the general help file.

To be fair, this is preposterously flexible in a way that a general inventory might not be. But it’s also tedious for game authors and potentially confusing for players.

I don’t think an inventory would be particularly difficult to support, and MegaZeux is already halfway there. Most likely, the support is missing because it would need to be based on some concept of a custom object, and MegaZeux doesn’t have that either. I’ll get to that in a bit.

Creating new objects

MegaZeux allows you to create “robots”, objects that are controlled entirely through code you write in a simple programming language. You can copy and paste robots around as easily as any other object on the map. Cool.

What’s less cool is that robots can’t share code — when you place one, you make a separate copy of all of its code. If you create a small horde of custom monsters, then later want to make a change, you’ll have to copy/paste all the existing ones. Hope you don’t have them on other boards!

Some workarounds exist: you could make use of robots’ ability to copy themselves at runtime, and it’s possible to save or load code to/from an external file at runtime. More cumbersome than defining a template object and dropping it wherever you want, and definitely much less accessible.

This is really, really bad, because the only way to extend any of the builtin objects is to replace them with robots!

I’m a little spoiled by ZDoom, where you can create as many kinds of actor as you want. Actors can even inherit from one another, though the mechanism is a little limited and… idiosyncratic, so I wouldn’t call it beginner-friendly. It’s pretty nice to be able to define a type of monster or decoration and drop it all over a map, and I’m surprised such a thing doesn’t exist in MegaZeux, where boards and the viewport both tend to be fairly large.

This is the core of how ZDoom’s inventory works, too. I believe that inventories contain only kinds, not individual actors — that is, you can have 5 red keys, but the game only knows “5 of RedCard” rather than having five distinct RedCard objects. I’m sure part of the reason MegaZeux has no general-purpose inventory is that every custom object is completely distinct, with nothing fundamentally linking even identical copies of the same robot together.

Combat

By default, the player can shoot bullets by holding Space and pressing a direction. (Moving and shooting at the same time is… difficult.) Like everything else, bullets are fixed to the character grid, so they move an entire tile at a time.

Bullets can also destroy other projectiles, sometimes. A bullet hitting another bullet will annihilate both. A bullet hitting a fireball might either turn the fireball into a regular fire tile or simply be destroyed, depending on which animation frame the fireball is in when the bullet hits it. I didn’t know this until someone told me only a couple weeks ago; I’d always just thought it was random and arbitrary and frustrating. Seekers can’t be destroyed at all.

Most enemies charge directly at you; most are killed in one hit; most attack you by colliding with you; most are also destroyed by the collision.

The (built-in) combat is fairly primitive. It gives you something to do, but it’s not particularly satisfying, which is unfortunate for an adventure game engine.

Several factors conspire here. Graphical limitations make it difficult to give much visual feedback when something (including the player) takes damage or is destroyed. The motion of small, fast-moving objects on a fixed grid can be hard to keep track of. No inventory means weapons aren’t objects, either, so custom weapons need to be implemented separately in the global robot. No custom objects means new enemies and projectiles are difficult to create. No visual feedback means hitscan weapons are implausible.

I imagine some new and interesting directions would make themselves obvious in an engine with a higher resolution and custom objects.

Robotic

Robotic is MegaZeux’s programming language for defining the behavior of robots, and it’s one of the most interesting parts of the engine. A robot that acts like an item giving you two keys might look like this:

end
: "touch"
* "You found two keys!"
givekey c04
givekey c05
die as an item
MegaZeux's Robotic editor

Robotic has no blocks, loops, locals, or functions — though recent versions can fake functions by using special jumps. All you get is a fixed list of a few hundred commands. It’s effectively a form of bytecode assembly, with no manual assembling required.

And yet! For simple tasks, it works surprisingly well. Creating a state machine, as in the code above, is straightforward. end stops execution, since all robots start executing from their first line on startup. : "touch" is a label (:"touch" is invalid syntax) — all external stimuli are received as jumps, and touch is a special label that a robot jumps to when the player pushes against it. * displays a message in the colorful status line at the bottom of the screen. givekey gives a key of a specific color — colors are a first-class argument type, complete with their own UI in the editor and an automatic preview of the particular colors. die as an item destroys the robot and simultaneously moves the player on top of it, as though the player had picked it up.

A couple other interesting quirks:

  • Most prepositions, articles, and other English glue words are semi-optional and shown in grey. The line die as an item above has as an greyed out, indicating that you could just type die item and MegaZeux would fill in the rest. You could also type die as item, die an item, or even die through item, because all of as, an, and through act like whitespace. Most commands sprinkle a few of these in to make themselves read a little more like English and clarify the order of arguments.

  • The same label may appear more than once. However, labels may be zapped, and a jump will always go to the first non-zapped occurrence of a label. This lets an author encode a robot’s state within the state of its own labels, obviating the need for state-tracking variables in many cases. (Zapping labels predates per-robot variables — “local counters” — which are unhelpfully named local through local32.)

    Of course, this can rapidly spiral out of control when state changes are more complicated or several labels start out zapped or different labels are zapped out of step with each other. Robotic offers no way to query how many of a label have been zapped and MegaZeux has no debugger for label states, so it’s not hard to lose track of what’s going on. Still, it’s an interesting extension atop a simple label-based state machine; a rough model of the mechanic in code follows this list.

  • The built-in types often have some very handy shortcuts. For example, GO [dir] # tells a robot to move in some direction, some number of spaces. The directions you’d expect all work: NORTH, SOUTH, EAST, WEST, and synonyms like N and UP. But there are some extras like RANDNB to choose a random direction that doesn’t block the robot, or SEEK to move towards the player, or FLOW to continue moving in its current direction. Some of the extras only make sense in particular contexts, which complicates them a little, but the ability to tell an NPC to wander aimlessly with only RANDNB is incredible.

  • Robotic is more powerful than you might expect; it can change anything you can change in the editor, emulate the behavior of most other builtins, and make use of several features not exposed in the editor at all.
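
Here's the rough model of label zapping promised above, as speculative Lua rather than anything resembling MegaZeux's implementation: each label name maps to an ordered list of occurrences, a jump lands on the first occurrence still live, and zapping disables that occurrence.

local labels = {}   -- name -> array of { line = n, zapped = false }

local function define(name, line)
    labels[name] = labels[name] or {}
    table.insert(labels[name], { line = line, zapped = false })
end

local function jump(name)
    for _, occ in ipairs(labels[name] or {}) do
        if not occ.zapped then return occ.line end
    end
    return nil          -- every occurrence zapped: the jump is ignored
end

local function zap(name)
    for _, occ in ipairs(labels[name] or {}) do
        if not occ.zapped then
            occ.zapped = true
            return
        end
    end
end

-- Two "touch" labels encode two states: the first touch, then all later ones.
define("touch", 10)
define("touch", 20)
print(jump("touch"))   --> 10
zap("touch")
print(jump("touch"))   --> 20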

Nowadays, the obvious choice for an embedded language is Lua. It’d be much more flexible, to be sure, but it’d lose a little of the charm. One of the advantages of creating a totally custom language for a game is that you can add syntax for very common engine-specific features, like colors; in a general-purpose language, those are a little clumsier.

For comparison, here's roughly how the two-keys item from earlier might look in a hypothetical Lua API:

function myrobot:ontouch(toucher)
    -- React only to the player, not to other things bumping into us.
    if not toucher.is_player then
        return false
    end
    world:showstatus("You found two keys!")
    toucher.inventory:add(Key{color=world.colors.RED})
    toucher.inventory:add(Key{color=world.colors.PURPLE})
    -- Remove this robot from the board, like "die as an item".
    self:die()
    return true
end

Changing the rules

MegaZeux has a couple kinds of built-in objects that are difficult to replicate — and thus difficult to customize.

One is projectiles, mentioned earlier. Several variants exist, and a handful of specific behaviors can be toggled with board or world settings, but otherwise that’s all you get. It should be feasible to replicate them all with robots, but I suspect it’d involve a lot of subtleties.
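
As a taste of those subtleties, here's a hedged Lua sketch of just the bullet rules described back in the Combat section; the board API here is invented for illustration.

-- Sketch of built-in bullet behavior a robot replica would re-encode.
local DX = { north = 0, south = 0, east = 1, west = -1 }
local DY = { north = -1, south = 1, east = 0, west = 0 }

local function bullet_tick(bullet, board)
    -- Bullets are fixed to the grid: they move a whole tile at a time.
    local x = bullet.x + DX[bullet.dir]
    local y = bullet.y + DY[bullet.dir]
    local other = board:thing_at(x, y)
    if other == nil then
        bullet.x, bullet.y = x, y
    elseif other.kind == "bullet" then
        board:destroy(other)                -- bullet meets bullet:
        board:destroy(bullet)               -- both are annihilated
    elseif other.kind == "fireball" then
        if other.frame == 0 then
            board:replace(other, "fire")    -- one frame: fireball becomes fire
        end
        board:destroy(bullet)               -- either way, the bullet dies
    else
        board:hit(other)                    -- anything else takes the hit
        board:destroy(bullet)
    end
end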

Another is terrain. MegaZeux has a concept of a floor layer (though this is not explicitly exposed in the editor) and some floor tiles have different behavior. Ice is slippery; forest blocks almost everything but can be trampled by the player; lava hurts the player a lot; fire hurts the player and can spread, but burns out after a while. The trick with replicating these is that robots cannot be walked on. An alternative is to use sensors, which can be walked on and which can be controlled by a robot, but anything other than the player will push a sensor rather than stepping onto it. The only other approach I can think of is to keep track of all tiles that have a custom terrain, draw or animate them manually with custom floor tiles, and constantly check whether something’s standing there.
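
That last approach might look something like the following Lua sketch, again with an invented API: remember every tile of hand-rolled terrain, then poll each tick for anything standing on one.

-- Hand-rolled "lava" tiles, tracked manually.
local custom_lava = {}                  -- set keyed by "x,y" strings

local function set_lava(x, y)
    custom_lava[x .. "," .. y] = true
end

local function terrain_tick(board)
    for key in pairs(custom_lava) do
        local x, y = key:match("^(%d+),(%d+)$")
        local thing = board:thing_at(tonumber(x), tonumber(y))
        if thing and thing.kind == "player" then
            board:hurt(thing, 10)       -- mimic built-in lava damage
        end
        -- a real version would also draw and animate the tile here
    end
end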

Last are powerups, which are really effects that rings or potions can give you. Some of them are special cases of effects that Robotic can do more generally, such as giving 10 health or changing all of one object into another. Some are completely custom engine stuff, like “Slow Time”, which makes everything on the board (even robots!) run at half speed. The latter are the ones you can’t easily emulate. What if you want to run everything at a quarter speed, for whatever reason? Well, you can’t, short of replacing everything with robots and doing a multiplication every time they wait.
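
The robot-based workaround amounts to scaling every wait by a global factor, something like this sketch (all names invented):

-- Robots sleep N * SPEED_FACTOR ticks instead of N.
local SPEED_FACTOR = 4                  -- 4 = quarter speed

local function wait(robot, cycles)
    robot.sleep = cycles * SPEED_FACTOR
end

local function robot_tick(robot)
    if robot.sleep > 0 then
        robot.sleep = robot.sleep - 1   -- still waiting: skip this tick
        return
    end
    robot:act()                         -- hypothetical per-robot behavior
end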

ZDoom has a similar problem: it offers fixed sets of behaviors and powerups (which mostly derive from the commercial games it supports) and that’s it. You can manually script other stuff and go quite far, but some surprisingly simple ideas are very difficult to implement, just because the engine doesn’t offer the right kind of hook.

The tricky part of a generic engine is that a game creator will eventually want to change the rules, and they can only do that if the engine has rules for changing those rules. If the engine devs never thought of it, you’re out of luck.

Someone else please carry on this legacy

MegaZeux still sees development activity, but it’s very sporadic — the last release was in 2012. New features tend to be about making the impossible possible, rather than making the difficult easier. I think it’s safe to call MegaZeux finished, in the sense that a novel is finished.

I would really like to see something pick up its torch. It’s a very tricky problem, especially with the sprawling complexity of games, but surely it’s worth giving non-developers a way to try out the field.

I suppose if ZZT and MegaZeux and ZDoom have taught us anything, it’s that the best way to get started is to just write a game and give it very flexible editing tools. Maybe we should do that more. Maybe I’ll try to do it with Isaac’s Descent HD, and we’ll see how it turns out.

Chrome and Firefox Brand The Pirate Bay As a “Phishing” Site… Again

Post Syndicated from Ernesto original https://torrentfreak.com/chrome-and-firefox-brand-the-pirate-bay-as-a-phishing-site-again-161006/

Millions of Pirate Bay users are currently unable to access the torrent detail pages on the site without receiving a stark warning.

Over the past few hours Chrome and Firefox have started to block access to ThePirateBay.org due to reported security issues.

The homepage and various categories can be reached without problems, but when visitors navigate to a download page they are presented with an ominous red warning banner.

“Deceptive site ahead: Attackers on Thepiratebay.org may trick you into doing something dangerous like installing software or revealing your personal information,” it reads.

“Google Safe Browsing recently detected phishing on thepiratebay.org. Phishing sites pretend to be other websites to trick you,” the Chrome warning adds.

Chrome’s latest Pirate Bay warning

Firefox is showing a similar error message, as do all applications and services that use Google’s safe browsing database, which currently lists TPB as “partially dangerous.”

According to Google the notorious torrent site is linked to a phishing effort, where malicious actors try to steal the personal information of visitors.

It’s likely that the security error is caused by a malicious third-party advertisement. The TPB team informs TorrentFreak that they are aware of the issue, which they hope will be resolved soon.

This is not the first time that The Pirate Bay has been flagged by Google’s safe browsing filter. The same happened just a month ago, when the site was accused of spreading “harmful programs.” That warning eventually disappeared after a few days.

By now, most Chrome and Firefox users should be familiar with these intermittent warning notices. Those who are in a gutsy mood can simply “ignore the warning” or take steps (Chrome, FF) to bypass the blocks permanently.


Malicious Torrent Network Tool Revealed By Security Company

Post Syndicated from Andy original https://torrentfreak.com/malicious-torrent-network-tool-revealed-by-security-company-160921/

More than 35 years after 15-year-old high school student Rich Skrenta created the first publicly spread virus, millions of pieces of malware are being spread around the world.

Attackers’ motives are varied but these days they’re often working for financial gain. As a result, popular websites and their users are regularly targeted. Security company InfoArmor has just published a report detailing a particularly interesting threat which homes in on torrent site users.

“InfoArmor has identified a special tool used by cybercriminals to distribute malware by packaging it with the most popular torrent files on the Internet,” the company reports.

InfoArmor says the so-called “RAUM” tool is being offered via “underground affiliate networks” with attackers being financially incentivized to spread the malicious software through infected torrent files.

“Members of these networks are invited by special invitation only, with strict verification of each new member,” the company reports.

InfoArmor says that the attackers’ infrastructure has a monitoring system in place which allows them to track the latest trends in downloading, presumably so that attacks can reach the greatest numbers of victims.

“The bad actors have analyzed trends on video, audio, software and other digital content downloads from around the globe and have created seeds on famous torrent trackers using weaponized torrents packaged with malicious code,” they explain.

RAUM instances were associated with a range of malware including CryptXXX, CTB-Locker and Cerber, online-banking Trojan Dridex and password stealing spyware Pony.

“We have identified in excess of 1,639,000 records collected in the past few months from the infected victims with various credentials to online-services, gaming, social media, corporate resources and exfiltrated data from the uncovered network,” InfoArmor reveals.

What is perhaps most interesting about InfoArmor’s research is how it shines light on the operation of RAUM behind the scenes. The company has published a screenshot which it says shows the system’s dashboard, featuring infected torrents on several sites, a ‘fake’ Pirate Bay site in particular.


“Threat actors were systematically monitoring the status of the created malicious seeds on famous torrent trackers such as The Pirate Bay, ExtraTorrent and many others,” the researchers write.

“In some cases, they were specifically looking for compromised accounts of other users on these online communities that were extracted from botnet logs in order to use them for new seeds on behalf of the affected victims without their knowledge, thus increasing the reputation of the uploaded files.”


According to InfoArmor the malware was initially spread using uTorrent, although any client could have done the job. More recently, however, new seeds have been served through online servers and some hacked devices.

In some cases the malicious files continued to be seeded for more than 1.5 months. Tests by TF on the sample provided showed that most of the files listed have now been removed by the sites in question.

Completely unsurprisingly, people who use torrent sites to obtain software and games (as opposed to video and music files) are those most likely to come into contact with RAUM and associated malware. As the image below shows, Windows 7 and 10 packs and their activators feature prominently.


“All of the created malicious seeds were monitored by cybercriminals in order to prevent early detection by [anti-virus software] and had different statuses such as ‘closed,’ ‘alive,’ and ‘detected by antivirus.’ Some of the identified elements of their infrastructure were hosted in the TOR network,” InfoArmor explains.

The researchers say that RAUM is a tool used by an Eastern European organized crime group known as Black Team. They also report several URLs and IP addresses from where the team operates. We won’t publish them here but it’s of some comfort to know that between Chrome, Firefox and MalwareBytes protection, all were successfully blocked on our test machine.

InfoArmor concludes by warning users to exercise extreme caution when downloading pirated digital content. We’d go a step further and advise people to be wary of installing any software from untrusted sources, no matter where it’s found online.


Chrome and Firefox Block Pirate Bay Over “Harmful Programs”

Post Syndicated from Ernesto original https://torrentfreak.com/chrome-and-firefox-block-pirate-bay-over-harmful-programs-160915/

Starting a few hours ago, Chrome and Firefox users have been unable to access The Pirate Bay’s torrent download pages without running into a roadblock.

Instead of a page filled with the latest torrents, visitors now see an ominous red warning banner when they try to grab a torrent.

“The site ahead contains harmful programs,” Google Chrome informs its users.

“Attackers on thepiratebay.org might attempt to trick you into installing programs that harm your browsing experience (for example, by changing your homepage or showing extra ads on sites you visit),” the warning adds.

Mozilla’s Firefox browser displays a similar message.

While Pirate Bay’s homepage and search are still freely available, torrent detail pages now show the following banner.

Chrome’s Pirate Bay block

Both Chrome and Firefox rely on Google’s Safe Browsing report which currently lists TPB as a partially dangerous site.

In addition to the two browsers, people who use Comodo’s Secure DNS also experienced problems reaching the site.

Comodo’s secure DNS has a built-in malware domain filtering feature, and earlier today it flagged the Pirate Bay as a “hacking” site, as the banner below shows. Shortly before publishing, this warning disappeared.

Pirate Bay hacking?

Comodo DNS still blocks access to ExtraTorrent, the second-largest torrent site, trailing just behind The Pirate Bay.

The secure DNS provider accuses ExtraTorrent of spreading “malicious” content. Interestingly, Google’s Safe Browsing doesn’t report any issues with ExtraTorrent’s domain name, so another source may play a role here.

This isn’t the first time that Comodo has blocked torrent sites and usually the warnings disappear again after a few hours or days. Until then, users can add the domains to a whitelist to regain access. Of course, they should do so at their own risk.

Chrome and Firefox users should be familiar with these intermittent warning notices as well, and can take steps to bypass the blocks if they are in a gutsy mood.


Stream Ripping Problem Worse Than Pirate Sites, IFPI Says

Post Syndicated from Andy original https://torrentfreak.com/stream-ripping-problem-worse-than-pirate-sites-ifpi-says-160913/

One of the recurring themes of recent years has been entertainment industry criticism of Google alongside claims the search giant doesn’t do enough to tackle piracy.

In more recent months, the focus has fallen on YouTube in particular, with the music industry painting the video hosting site as a haven for unlicensed tracks. This, the labels say, allows YouTube to undermine competitors and run a ‘DMCA protection racket‘.

While complaints surrounding the so-called “value gap” continue, the labels are now revisiting another problem that has existed for years.

For the unfamiliar, stream ripping is a mechanism for obtaining music from an online source and storing it on a local storage device in MP3 or similar format. Ripping can be achieved by using dedicated software or via various sites designed for the purpose.

With the largest library online, YouTube is the most popular destination for ‘rippers’. Broadly speaking, the site carries two kinds of music – that for which the site has a license and that uploaded without permission by its users. The labels consider the latter as straightforward piracy but the former is also problematic in a stream-ripping environment. Once a track is downloaded by a user from YouTube, labels aren’t getting paid per play anymore.

According to IFPI, the stream-ripping problem has become huge. A new study by Ipsos commissioned by IFPI has found that 49% of Internet users aged 16 to 24 admitted to stream ripping in the six months ending April. That’s a 41% increase over the same period a year earlier.

When considering all age groups the situation eases somewhat, but not by enough to calm IFPI’s nerves. Ipsos found that 30% of all Internet users had engaged in stream ripping this year, that’s 10% up on a year earlier.

In fact, according to comments made to FT (subscription) by IFPI, the problem has become so large that it is now the most popular form of online piracy, surpassing downloading from all of the world’s ‘pirate’ sites.

Precisely why there has been such a large increase isn’t clear, but it’s likely that the simplicity of sites such as YouTube-MP3 has played a big role. The site is huge by any measurement and has been extremely popular for many years. However, this year has seen a dramatic increase in visits, as shown below.

YouTube-MP3 traffic

Equally, with pirate site blockades springing up all over the world, users in affected regions will find YouTube and ripping sites much easier to access. Also, rippers tend to work well on mobile phones, giving young people the portability they desire for their music.

But while YouTube and Google will now find themselves under yet more pressure, the company hasn’t been silent on the issue of stream-ripping. On several occasions, YouTube lawyers have made legal threats against such sites, including YouTube-MP3 in 2012 and more recently against TubeNinja.

“We strive to keep YouTube a safe, responsible community, and to encourage respect for the rights of millions of YouTube creators,” an email from YouTube’s legal team to TubeNinja read.

“This requires compliance with the Terms of Service and API Terms of Service. We hope that you will cooperate with us by ceasing to offer TubeNinja with functionality that is designed to allow users to download content from YouTube within seven days of this letter.”

While it is indeed the biggest platform, the problem isn’t only limited to YouTube. Stream rippers are available for most streaming sites including Vimeo, Soundcloud, Bandcamp, Mixcloud, and many dozens of others, with Google itself providing convenient addons for its Chrome browser.

With the major labels now describing stream-ripping as the biggest piracy threat, expect to hear much more on this topic as the year unfolds.


WebTorrent: 250K Downloads & Going Strong With Zero Revenue

Post Syndicated from Andy original https://torrentfreak.com/webtorrent-250k-downloads-strong-with-zero-revenue-160827/

Stanford University graduate Feross Aboukhadijeh is passionate about P2P technology. The founder of P2P-assisted content delivery network PeerCDN (sold to Yahoo in 2013), Feross is also the inventor of WebTorrent.

In its classic form, WebTorrent is a BitTorrent client for the web. No external clients are needed for people to share files since everything is done in the user’s web browser with JavaScript. No browser plugins or extensions need to be installed, and nothing needs to be configured.

In the beginning, some doubted that it could ever work, but Feross never gave up on his dream.

“People thought WebTorrent was crazy. One of the Firefox developers literally said it wouldn’t be possible. I was like, ‘challenge accepted’,” Feross told TF this week.

WebTorrent

A few months after WebTorrent’s debut, Feross announced the arrival of WebTorrent Desktop (WD), a standalone torrent client with a few tricks up its sleeve.

After pasting a torrent or magnet link into its somewhat unusual client interface, content can be played almost immediately via an inbuilt player. And with AirPlay, Chromecast and DLNA support, WD is at home at the heart of any multi-display household.


But WebTorrent Desktop’s most interesting feature is its ability to find peers not only via trackers, DHT and PEX, but also using the WebTorrent protocol. This means that WD can share content with people using the web-based version of WebTorrent too.

WebTorrent Desktop

Since our April report, WebTorrent has been under constant development. It is now more responsive and uses fewer resources, casting has been improved, and subtitles are auto-detected, to name just a few improvements. As a result, the client has been growing its userbase too.

“The WebTorrent project is going full steam ahead and there has been lots of progress in the past few months,” Feross informs TF.

“We just passed a quarter million total downloads of the app – 254,431 downloads as of right now.”

For a young and totally non-commercial project, that’s an impressive number, but the accolades don’t stop there. The project currently has 2,083 stars on GitHub and recently added its 26th contributor.

In all, WebTorrent has nine people working on the core team, but since the client is open source and totally non-commercial, no one is earning anything from the project. According to Feross, this only makes WebTorrent stronger.

“People usually think that having revenue, investors, and employees gives you an advantage over your competition. That’s definitely true for certain things: you can hire designers, programmers, marketing experts, product managers, etc. to build out the product, add lots of features,” the developer says.

“But you have to pay your employees and investors, and these pressures usually cause companies to resort to adding advertising (or worse) to their products. When you have no desire to make a profit, you can act purely in the interests of the people using your product. In short, you can build a better product.”

So if not money, what drives people like Feross and his team to give up their time to create something and give it away?

“The real reason I care so much about WebTorrent is that I want decentralized apps to win. Right now, it’s so much easier to build a centralized app: it’s faster to build, uses tried-and-true technology, and it’s easier to monetize because the app creator has all the control. They can use that control to show you ads, sell your data, or make unilateral product changes for their own benefit,” he says.

“On the other hand, decentralized apps are censorship resistant, put users in control of their data, and are safe against user-hostile changes.

“That last point is really important. It’s because of the foresight of Bram Cohen that WebTorrent is even possible today: the BitTorrent protocol is an open standard. If you don’t like your current torrent app, you can easily switch! No one person or company has total control.”

WebTorrent Desktop developer DC Posch says that several things motivate him to work on the project, particularly when there’s no one to order him around.

“There’s satisfaction in craftsmanship, shipping something that feels really solid. Second, it’s awesome having 250,000 users and no boss,” he says.

“Third, it’s something that I want to exist. There are places like the Internet Archive that have lots of great material and no money for bandwidth. BitTorrent is a technologically elegant way to do zero cost distribution. Finally, I want to prove that non-commercial can be a competitive advantage. Freed from the need to monetize or produce a return, you can produce a superior product.”

To close, last year TF reported that WebTorrent had caught the eye of Netflix. Feross says that was a great moment for the project.

“It was pretty cool to show off WebTorrent at Netflix HQ. They were really interested in the possibility of WebTorrent to help during peak hours when everyone is watching Netflix and the uplink to ISPs like Comcast gets completely saturated. WebTorrent could help by letting Comcast subscribers share data amongst themselves without needing to traverse the congested Comcast-Netflix internet exchange,” he explains.

For now, WebTorrent is still a relative minnow when compared to giants such as uTorrent but there are an awful lot of people out there who share the ethos of Feross and his team. Only time will tell whether this non-commercial project will fulfill its dreams, but those involved will certainly have fun trying.
