Tag Archives: chrome

Now Anyone Can Embed a Pirate Movie in a Website

Post Syndicated from Andy original https://torrentfreak.com/now-anyone-can-embed-a-pirate-movie-in-a-website-170522/

While torrents are still the go-to source for millions of users seeking free online media, people are increasingly seeking the immediacy and convenience of web-based streaming.

As a result, hundreds of websites have appeared in recent years, offering Netflix-inspired interfaces that provide an enhanced user experience over the predominantly text-based approach utilized by most torrent sites.

While there hasn’t been a huge amount of innovation in either field recently, a service that raised its head during recent weeks is offering something new and potentially significant, if it continues to deliver on its promises without turning evil.

Vodlocker.to is the latest in a long list of sites using the Vodlocker name, which is bound to cause some level of confusion. However, what this Vodlocker variant offers is a convenient way for users to not only search for and find movies hosted on the Internet, but stream them instantly – with a twist.

After entering a movie’s IMDb code (the one starting ‘tt’) in a box on the page, Vodlocker quickly searches for the movie on various online hosting services, including Google Drive.

Entering the IMDb code

“We believe the complexity of uploading a video has become unnecessary, so we have created much like Google, an automated crawler that visits millions of pages every day to find all videos on the internet,” the site explains.

As shown in the image above, the site takes the IMDb number and generates code. That allows the user to embed an HTML5 video player in their own website, which plays the movie in question. We tested around a dozen movies with a 100% success rate, with search times ranging from a couple of seconds to around 20 seconds at most.

A demo on the site shows exactly how the embed code currently performs, with the video player offering the usual controls such as play and pause, with a selector for quality and volume levels. The usual ‘full screen’ button sits in the bottom right corner.

The player can be embedded anywhere

Near the top of the window are options for selecting different sources for the video, should it become unplayable or if a better quality version is required. Interestingly, should one of those sources be Google Video, Vodlocker says its player offers Chromecast and subtitle support.

“Built-in chromecast plugin streams free HD movies/tv shows from your website to your TV via Google Chromecast. Built-in opensubtitles.org plugin finds subtitles in all languages and auto-selects your language,” the site reports.

In addition to a link-checker that aims to exclude broken links (missing sources), the service also pulls movie-related artwork from IMDb, to display while the selected movie is being prepared for streaming.

The site is already boasting a “massive database” of movies, which will make it of immediate use to thousands of websites that might want to embed movies or TV shows in their web pages.

As long as Vodlocker can cope with the load, this could effectively spawn a thousand new ‘pirate’ websites overnight, but the service generally seems more suited to smaller, blog-like sites that might want to display a smaller selection of titles.

That being said, it’s questionable whether a site would seek to become entirely reliant on a service like this. While the videos it indexes are more decentralized, the service itself could be shut down in the blink of an eye, at which point every link stops working.

It’s also worth noting that the service uses IFrame tags, which some webmasters might feel uncomfortable about deploying on their sites due to security concerns.

The New Vodlocker API demo can be found here, for as long as it lasts.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How to Control TLS Ciphers in Your AWS Elastic Beanstalk Application by Using AWS CloudFormation

Post Syndicated from Paco Hope original https://aws.amazon.com/blogs/security/how-to-control-tls-ciphers-in-your-aws-elastic-beanstalk-application-by-using-aws-cloudformation/

Securing data in transit is critical to the integrity of transactions on the Internet. Whether you log in to an account with your user name and password or give your credit card details to a retailer, you want your data protected as it travels across the Internet from place to place. One of the protocols in widespread use to protect data in transit is Transport Layer Security (TLS). Every time you access a URL that begins with “https” instead of just “http”, you are using a TLS-secured connection to a website.

To demonstrate that your application has a strong TLS configuration, you can use services like the one provided by SSL Labs. There are also open source, command-line-oriented TLS testing programs such as testssl.sh (which I do not cover in this post) and sslscan (which I cover later in this post). The goal of testing your TLS configuration is to provide evidence that weak cryptographic ciphers are disabled in your TLS configuration and only strong ciphers are enabled. In this blog post, I show you how to control the TLS security options for your secure load balancer in AWS CloudFormation, pass the TLS certificate and host name for your secure AWS Elastic Beanstalk application to the CloudFormation script as parameters, and then confirm that only strong TLS ciphers are enabled on the launched application by testing it with SSLLabs.

Background

In some situations, it’s not enough to simply turn on TLS with its default settings and call it done. Over the years, a number of vulnerabilities have been discovered in the TLS protocol itself with codenames such as CRIME, POODLE, and Logjam. Though some vulnerabilities were in specific implementations, such as OpenSSL, others were vulnerabilities in the Secure Sockets Layer (SSL) or TLS protocol itself.

The only way to avoid some TLS vulnerabilities is to ensure your web server uses only the latest version of TLS. Some organizations want to limit their TLS configuration to the highest possible security levels to satisfy company policies, regulatory requirements, or other information security requirements. In practice, such limitations usually mean using TLS version 1.2 (at the time of this writing, TLS 1.3 is in the works) and using only strong cryptographic ciphers. Note that forcing a high-security TLS connection in this manner limits which types of devices can connect to your web server. I address this point at the end of this post.

The default TLS configuration in most web servers is compatible with the broadest set of clients (such as web browsers, mobile devices, and point-of-sale systems). As a result, older ciphers and protocol versions are usually enabled. This is true for the Elastic Load Balancing load balancer that is created in your Elastic Beanstalk application as well as for web server software such as Apache and nginx.  For example, TLS versions 1.0 and 1.1 are enabled in addition to 1.2. The RC4 cipher is permitted, even though that cipher is too weak for the most demanding security requirements. If your application needs to prioritize the security of connections over compatibility with legacy devices, you must adjust the TLS encryption settings on your application. The solution in this post helps you make those adjustments.

Prerequisites for the solution

Before you implement this solution, you must have a few prerequisites in place:

  1. You must have a hosted zone in Amazon Route 53 where the name of the secure application will be created. I use example.com as my domain name in this post and assume that I host example.com publicly in Route 53. To learn more about creating and hosting a zone publicly in Route 53, see Working with Public Hosted Zones.
  2. You must choose a name to be associated with the secure app. In this case, I use secure.example.com as the DNS name to be associated with the secure app. This means that I’m trying to create an Elastic Beanstalk application whose URL will be https://secure.example.com/.
  3. You must have a TLS certificate hosted in AWS Certificate Manager (ACM). This certificate must be issued with the name you decided in Step 2. If you are new to ACM, see Getting Started. If you are already familiar with ACM, request a certificate and get its Amazon Resource Name (ARN). Look up the ARN for the certificate that you created by opening the ACM console. The ARN looks something like: arn:aws:acm:eu-west-1:111122223333:certificate/12345678-abcd-1234-abcd-1234abcd1234. (A programmatic way to look up the ARN is sketched just below.)
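
If you would rather look the ARN up programmatically than in the console, the following Python sketch does the same thing with boto3 (an illustration, not part of the solution; it assumes your AWS credentials and region are already configured):

import boto3

acm = boto3.client("acm")

# Print the ARN of the certificate issued for the name chosen in Step 2.
for cert in acm.list_certificates()["CertificateSummaryList"]:
    if cert["DomainName"] == "secure.example.com":
        print(cert["CertificateArn"])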

Implementing the solution

You can use two approaches to control the TLS ciphers used by your load balancer: one is to use a predefined protocol policy from AWS, and the other is to write your own protocol policy that lists exactly which ciphers should be enabled. There are many ciphers and options that can be set, so the appropriate AWS predefined policy is often the simplest policy to use. If you have to comply with an information security policy that requires enabling or disabling specific ciphers, you will probably find it easiest to write a custom policy listing only the ciphers that are acceptable to your requirements.

AWS released two predefined TLS policies on March 10, 2017: ELBSecurityPolicy-TLS-1-1-2017-01 and ELBSecurityPolicy-TLS-1-2-2017-01. These policies restrict TLS negotiations to TLS 1.1 and 1.2, respectively. You can find a good comparison of the ciphers that these policies enable and disable in the HTTPS listener documentation for Elastic Load Balancing. If your requirements are simply “support TLS 1.1 and later” or “support TLS 1.2 and later,” those AWS predefined cipher policies are the best place to start. If you need to control your cipher choice with a custom policy, I show you in this post which lines of the CloudFormation template to change.

Download the predefined policy CloudFormation template

Many AWS customers rely on CloudFormation to launch their AWS resources, including their Elastic Beanstalk applications. To change the ciphers and protocol versions supported on your load balancer, you must put those options in a CloudFormation template. You can store your site’s TLS certificate in ACM and create the corresponding DNS alias record in the correct zone in Route 53.

To start, download the CloudFormation template that I have provided for this blog post, or deploy the template directly in your environment. This template creates a CloudFormation stack in your default VPC that contains two resources: an Elastic Beanstalk application that deploys a standard sample PHP application, and a Route 53 record in a hosted zone. This CloudFormation template selects the AWS predefined policy called ELBSecurityPolicy-TLS-1-2-2017-01 and deploys it.

Launching the sample application from the CloudFormation console

In the CloudFormation console, choose Create Stack. You can either upload the template through your browser, or load the template into an Amazon S3 bucket and type the S3 URL in the Specify an Amazon S3 template URL box.

After you click Next, you will see that there are three parameters defined: CertificateARN, ELBHostName, and HostedDomainName. Set the CertificateARN parameter to the ARN of the certificate you want to use for your application. Set the ELBHostName parameter to the hostname part of the URL. For example, if your URL were https://secure.example.com/, the HostedDomainName parameter would be example.com and the ELBHostName parameter would be secure.

For the sample application, choose Next and then choose Create, and the CloudFormation stack will be created. For your own applications, you might need to set other options such as a database, VPC options, or Amazon SNS notifications. For more details, see AWS Elastic Beanstalk Environment Configuration. To deploy an application other than our sample PHP application, create your own application source bundle.

Launching the sample application from the command line

In addition to launching the sample application from the console, you can specify the parameters from the command line. Because the template uses parameters, you can launch multiple copies of the application, specifying different parameters for each copy. To launch the application from a Linux command line with the AWS CLI, insert the correct values for your application, as shown in the following command.

aws cloudformation create-stack --stack-name "SecureSampleApplication" \
--template-url https://<URL of your CloudFormation template in S3> \
--parameters ParameterKey=CertificateARN,ParameterValue=<Your ARN> \
ParameterKey=ELBHostName,ParameterValue=<Your Host Name> \
ParameterKey=HostedDomainName,ParameterValue=<Your Domain Name>

When that command exits, it prints the StackID of the stack it created. Save that StackID for later so that you can fetch the stack’s outputs from the command line.
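
If you script your AWS deployments in Python, a rough boto3 equivalent of the preceding command looks like the following sketch (replace the placeholder values just as you would in the CLI version). It also waits for the stack to finish and prints the outputs, which saves you the separate describe-stacks call shown later.

import boto3

cfn = boto3.client("cloudformation")

stack_id = cfn.create_stack(
    StackName="SecureSampleApplication",
    TemplateURL="https://<URL of your CloudFormation template in S3>",
    Parameters=[
        {"ParameterKey": "CertificateARN", "ParameterValue": "<Your ARN>"},
        {"ParameterKey": "ELBHostName", "ParameterValue": "<Your Host Name>"},
        {"ParameterKey": "HostedDomainName", "ParameterValue": "<Your Domain Name>"},
    ],
)["StackId"]

# Block until the stack reaches CREATE_COMPLETE, then print its outputs.
cfn.get_waiter("stack_create_complete").wait(StackName=stack_id)
for output in cfn.describe_stacks(StackName=stack_id)["Stacks"][0]["Outputs"]:
    print(output["OutputKey"], "=", output["OutputValue"])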

Using a custom cipher specification

If you want to specify your own cipher choices, you can use the same CloudFormation template and change two lines. Let’s assume your information security policies require you to disable any ciphers that use Cipher Block Chaining (CBC) mode encryption. These ciphers are enabled in the ELBSecurityPolicy-TLS-1-2-2017-01 managed policy, so to satisfy that security requirement, you have to modify the CloudFormation template to use your own protocol policy.

In the template, locate the three lines that define the TLSHighPolicy.

- Namespace:  aws:elb:policies:TLSHighPolicy
  OptionName: SSLReferencePolicy
  Value:      ELBSecurityPolicy-TLS-1-2-2017-01

Change the OptionName and Value for the TLSHighPolicy. Instead of referring to the AWS predefined policy by name, explicitly list all the ciphers you want to use. Change those three lines so they look like the following.

- Namespace:  aws:elb:policies:TLSHighPolicy
  OptionName: SSLProtocols
  Value:      Protocol-TLSv1.2,Server-Defined-Cipher-Order,ECDHE-ECDSA-AES256-GCM-SHA384,ECDHE-ECDSA-AES128-GCM-SHA256,ECDHE-RSA-AES256-GCM-SHA384,ECDHE-RSA-AES128-GCM-SHA256

This protocol policy stipulates that the load balancer should:

  • Negotiate connections using only TLS 1.2.
  • Ignore any attempts by the client (for example, the web browser or mobile device) to negotiate a weaker cipher.
  • Accept four specific, strong combinations of cipher and key exchange—and nothing else.

The protocol policy enables only TLS 1.2, strong ciphers that do not use CBC mode encryption, and strong key exchange.

Connect to the secure application

When your CloudFormation stack is in the CREATE_COMPLETE state, you will find three outputs:

  1. The public DNS name of the load balancer
  2. The secure URL that was created
  3. TestOnSSLLabs output that contains a direct link for testing your configuration

You can either enter the secure URL in a web browser (for example, https://secure.example.com/), or click the link in the Outputs to open your sample application and see the demo page. Note that you must use HTTPS—this template has disabled HTTP on port 80 and only listens with HTTPS on port 443.

If you launched your application through the command line, you can view the CloudFormation outputs using the command line as well. You need to know the StackId of the stack you launched and insert it in the following stack-name parameter.

aws cloudformation describe-stacks --stack-name "<ARN of Your Stack>" \
--query 'Stacks[0].Outputs'

Test your application over the Internet with SSLLabs

The easiest way to confirm that the load balancer is using the secure ciphers that we chose is to enter the URL of the load balancer in the form on SSL Labs’ SSL Server Test page. If you do not want the name of your load balancer to be shared publicly on SSLLabs.com, select the Do not show the results on the boards check box. After a minute or two of testing, SSLLabs gives you a detailed report of every cipher it tried and how your load balancer responded. This test simulates many devices that might connect to your website, including mobile phones, desktop web browsers, and software libraries such as Java and OpenSSL. The report tells you whether these clients would be able to connect to your application successfully.

Assuming all went well, you should receive an A grade for the sample application. The biggest contributors to the A grade are:

  • Supporting only TLS 1.2, and not TLS 1.1, TLS 1.0, or SSL 3.0
  • Supporting only strong ciphers such as AES, and not weaker ciphers such as RC4
  • Having an X.509 public key certificate issued correctly by ACM

How to test your application privately with sslscan

You might not be able to reach your Elastic Beanstalk application from the Internet because it might be in a private subnet that is only accessible internally. If you want to test the security of your load balancer’s configuration privately, you can use one of the open source command-line tools such as sslscan. You can install and run the sslscan command on any Amazon EC2 Linux instance or even from your own laptop. Be sure that the Elastic Beanstalk application you want to test will accept an HTTPS connection from your Amazon Linux EC2 instance or from your laptop.

The easiest way to get sslscan on an Amazon Linux EC2 instance is to:

  1. Enable the Extra Packages for Enterprise Linux (EPEL) repository.
  2. Run sudo yum install sslscan.
  3. After the command runs successfully, run sslscan secure.example.com to scan your application for supported ciphers.

The results are similar to Qualys’ results at SSLLabs.com, but the sslscan tool does not summarize and evaluate the results to assign a grade. It just reports whether your application accepted a connection using the cipher that it tried. You must decide for yourself whether that set of accepted connections represents the right level of security for your application. If you have been asked to build a secure load balancer that meets specific security requirements, the output from sslscan helps to show how the security of your application is configured.

The following sample output shows a small subset of the total output of the sslscan tool.

Accepted TLS12 256 bits AES256-GCM-SHA384
Accepted TLS12 256 bits AES256-SHA256
Accepted TLS12 256 bits AES256-SHA
Rejected TLS12 256 bits CAMELLIA256-SHA
Failed TLS12 256 bits PSK-AES256-CBC-SHA
Rejected TLS12 128 bits ECDHE-RSA-AES128-GCM-SHA256
Rejected TLS12 128 bits ECDHE-ECDSA-AES128-GCM-SHA256
Rejected TLS12 128 bits ECDHE-RSA-AES128-SHA256

An Accepted connection is one that was successful: the load balancer and the client were both able to use the indicated cipher. Failed and Rejected connections are connections on which the load balancer would not accept the level of security that the client was requesting. As a result, the load balancer closed the connection instead of communicating insecurely. The difference between Failed and Rejected is based on whether the TLS connection was closed cleanly.
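
If you cannot install sslscan but have Python 3.7 or later available, you can approximate the same spot check with the standard ssl module. The following is only a rough sketch that pins each handshake to a single protocol version and reports whether the server accepts it (one caveat: a modern OpenSSL build may itself refuse the oldest versions on the client side, so treat the results as a quick check rather than proof):

import socket
import ssl

HOST = "secure.example.com"  # replace with your application's host name

for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
    ctx = ssl.create_default_context()
    ctx.minimum_version = version  # pin the handshake to exactly one version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print(version.name, "accepted, cipher:", tls.cipher()[0])
    except OSError:  # ssl.SSLError is a subclass of OSError
        print(version.name, "rejected")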

Comparing the two policies

The main difference between our custom policy and the AWS predefined policy is whether or not CBC ciphers are accepted. The test results with both policies are identical except for the results shown in the following table. The only change in the policy, and therefore the only change in the results, is that the cipher suites using CBC ciphers have been disabled.

Cipher Suite Name            Encryption Algorithm  Key Size (bits)  ELBSecurityPolicy-TLS-1-2-2017-01  Custom Policy
ECDHE-RSA-AES256-GCM-SHA384  AESGCM                256              Enabled                            Enabled
ECDHE-RSA-AES256-SHA384      AES                   256              Enabled                            Disabled
AES256-GCM-SHA384            AESGCM                256              Enabled                            Disabled
AES256-SHA256                AES                   256              Enabled                            Disabled
ECDHE-RSA-AES128-GCM-SHA256  AESGCM                128              Enabled                            Enabled
ECDHE-RSA-AES128-SHA256      AES                   128              Enabled                            Disabled
AES128-GCM-SHA256            AESGCM                128              Enabled                            Disabled
AES128-SHA256                AES                   128              Enabled                            Disabled

Strong ciphers and compatibility

The custom policy described in the previous section prevents legacy devices and older versions of software and web browsers from connecting. The output at SSLLabs provides a list of devices and applications (such as Internet Explorer 10 on Windows 7) that cannot connect to an application that uses the TLS policy. By design, the load balancer will refuse to connect to a device that is unable to negotiate a connection at the required levels of security. Users who use legacy software and devices will see different errors, depending on which device or software they use (for example, Internet Explorer on Windows, Chrome on Android, or a legacy mobile application). The error messages will be some variation of “connection failed” because the Elastic Load Balancer closes the connection without responding to the user’s request. This behavior can be problematic for websites that must be accessible to older desktop operating systems or older mobile devices.

If you need to support legacy devices, adjust the TLSHighPolicy in the CloudFormation template. For example, if you need to support web browsers on Windows 7 systems (and you cannot enable TLS 1.2 support on those systems), you can change the policy to enable TLS 1.1. To do this, change the value of SSLReferencePolicy to ELBSecurityPolicy-TLS-1-1-2017-01.

Enabling legacy protocol versions such as TLS version 1.1 will allow older devices to connect, but then the application may not be compliant with the information security policies or business requirements that require strong ciphers.

Conclusion

Using Elastic Beanstalk, Route 53, and ACM can help you launch secure applications that are designed to not only protect data but also meet regulatory compliance requirements and your information security policies. The TLS policy, either custom or predefined, allows you to control exactly which cryptographic ciphers are enabled on your Elastic Load Balancer. The TLS test results provide you with clear evidence you can use to demonstrate compliance with security policies or requirements. The parameters in this post’s CloudFormation template also make it adaptable and reusable for multiple applications. You can use the same template to launch different applications on different secure URLs by simply changing the parameters that you pass to the template.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the CloudFormation forum.

– Paco

AFL experiments, or please eat your brötli

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2017/04/afl-experiments-or-please-eat-your.html

When messing around with AFL, you sometimes stumble upon something unexpected or amusing. Say, having the fuzzer spontaneously synthesize JPEG files, come up with non-trivial XML syntax, or discover SQL semantics.

It is also fun to challenge yourself to employ fuzzers in non-conventional ways. Two canonical examples are having your fuzzing target call abort() whenever two libraries that are supposed to implement the same algorithm produce different outputs when given identical input data; or when a library produces different outputs when asked to encode or decode the same data several times in a row.
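
To make the pattern concrete, here is a minimal sketch of such a harness in Python (not an actual harness from the experiments mentioned here, just the abort()-on-divergence idea, with zlib standing in for the library under test):

import os
import sys
import zlib

def harness(data: bytes) -> None:
    # Stability check: the same input must always produce the same output.
    first = zlib.compress(data)
    second = zlib.compress(data)
    if first != second:
        os.abort()  # the fuzzer records the SIGABRT as a crash
    # Round-trip check: decompressing must give back the original input.
    if zlib.decompress(first) != data:
        os.abort()

if __name__ == "__main__":
    harness(sys.stdin.buffer.read())

In practice you would compile an instrumented C harness, or drive a Python one with something like python-afl, but the abort-on-mismatch logic stays the same.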

Such tricks may sound fanciful, but they actually find interesting bugs. In one case, AFL-based equivalence fuzzing revealed a bunch of fairly rudimentary flaws in common bignum libraries, with some theoretical implications for crypto apps. Another time, output stability checks revealed long-lived issues in IJG jpeg and other widely-used image processing libraries, leaking data across web origins.

In one of my recent experiments, I decided to fuzz brotli, an innovative compression library used in Chrome. But since it’s already been fuzzed for many CPU-years, I wanted to do it with a twist: stress-test the compression routines, rather than the usually targeted decompression side. The latter is a far more fruitful target for security research, because decompression normally involves dealing with well-formed inputs, whereas compression code is meant to accept arbitrary data and not think about it too hard. That said, the low likelihood of flaws also means that the compression bits are a relatively unexplored surface that may be worth poking with a stick every now and then.

In this case, the library held up admirably – save for a handful of computationally intensive plaintext inputs (that are now easy to spot due to the recent improvements to AFL). But the output corpus synthesized by AFL, seeded with just a single file containing “0”, featured quite a few peculiar finds:

  • Strings that looked like viable bits of HTML or XML:
    <META HTTP-AAA IDEAAAA,
    DATA="IIA DATA="IIA DATA="IIADATA="IIA,
    </TD>.

  • Non-trivial numerical constants:
    1000,1000,0000000e+000000,
    0,000 0,000 0,0000 0x600,
    0000,$000: 0000,$000:00000000000000.

  • Nonsensical but undeniably English sentences:
    them with them m with them with themselves,
    in the fix the in the pin th in the tin,
    amassize the the in the in the inhe@massive in,
    he the themes where there the where there,
    size at size at the tie.

  • Bogus but semi-legible URLs:
    CcCdc.com/.com/m/ /00.com/.com/m/ /00(0(000000CcCdc.com/.com/.com

  • Snippets of Lisp code:
    )))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))).

The results are quite unexpected, given that they are just a product of randomly mutating a single-byte input file and observing the code coverage in a simple compression tool. The explanation is that brotli, in addition to more familiar binary coding methods, uses a static dictionary constructed by analyzing common types of web content. Somehow, by observing the behavior of the program, AFL was able to incrementally reconstruct quite a few of these hardcoded keywords – and then put them together in various semi-interesting ways. Not bad.
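
You can get a feel for that dictionary yourself with the Python brotli bindings (a quick experiment, assuming pip install brotli; exact byte counts will vary): a short string assembled from common web words compresses far better than random bytes of the same length, even with no repetition to exploit.

import os
import brotli

english = b"the data with them where there massive html"
noise = os.urandom(len(english))

# Dictionary hits let the English-like input shrink; the noise cannot.
print("english:", len(brotli.compress(english)), "bytes")
print("noise:  ", len(brotli.compress(noise)), "bytes")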

Amazon Lex – Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-lex-now-generally-available/

During AWS re:Invent I showed you how you could use Amazon Lex to build conversational voice & text interfaces. At that time we launched Amazon Lex in preview form and invited developers to sign up for access. Powered by the same deep learning technologies that drive Amazon Alexa, Amazon Lex allows you to build web & mobile applications that support engaging, lifelike interactions.

Today I am happy to announce that we are making Amazon Lex generally available, and that you can start using it today! Here are some of the features that we added during the preview:

Slack Integration – You can now create Amazon Lex bots that respond to messages and events sent to a Slack channel. Click on the Channels tab of your bot, select Slack, and fill in the form to get a callback URL for use with Slack:

Follow the tutorial (Integrating an Amazon Lex Bot with Slack) to see how to do this yourself.

Twilio Integration – You can now create Amazon Lex bots that respond to SMS messages sent to a Twilio SMS number. Again, you simply click on Channels, select Twilio, and fill in the form:

To learn more, read Integrating an Amazon Lex Bot with Twilio SMS.

SDK Support – You can now use the AWS SDKs to build iOS, Android, Java, JavaScript, Python, .NET, Ruby, PHP, Go, and C++ bots that span mobile, web, desktop, and IoT platforms and interact using either text or speech. The SDKs also support the build process for bots; you can programmatically add sample utterances, create slots, add slot values, and so forth. You can also manage the entire build, test, and deployment process programmatically.
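
As a rough illustration of that programmatic flow, here is a short boto3 sketch (Python; the bot, slot, and user names are only examples):

import boto3

# Build side: define a slot type with a couple of values.
lex_models = boto3.client("lex-models")
lex_models.put_slot_type(
    name="FlowerTypes",
    enumerationValues=[{"value": "roses"}, {"value": "tulips"}],
)

# Runtime side: send text to a published bot and print the reply.
lex_runtime = boto3.client("lex-runtime")
reply = lex_runtime.post_text(
    botName="OrderFlowers",
    botAlias="$LATEST",
    userId="demo-user",
    inputText="I would like to order some flowers",
)
print(reply.get("message"))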

Voice Input on Test Console – The Amazon Lex test console now supports voice input when used on the Chrome browser. Simply click on the microphone:

Utterance Monitoring – Amazon Lex now records utterances that were not recognized by your bot, otherwise known as missed utterances. You can review the list and add the relevant ones to your bot:

You can also watch the following CloudWatch metrics to get a better sense of how your users are interacting with your bot. Over time, as you add additional utterances and improve your bot in other ways, the metrics should be on the decline.

  • Text Missed Utterances (PostText)
  • Text Missed Utterances (PostContent)
  • Speech Missed Utterances

Easy Association of Slots with Utterances – You can now highlight text in the sample utterances in order to identify slots and add values to slot types:

Improved IAM Support – Amazon Lex permissions are now configured automatically from the console; you can now create bots without having to create your own policies.

Preview Response Cards – You can now view a preview of the response cards in the console:

To learn more, read about Using a Response Card.

Go For It
Pricing is based on the number of text and voice responses processed by your application; see the Amazon Lex Pricing page for more info.

I am really looking forward to seeing some awesome bots in action! Build something cool and let me know what you come up with.

Jeff;

 

Sense HAT Emulator Upgrade

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/sense-hat-emulator-upgrade/

Last year, we partnered with Trinket to develop a web-based emulator for the Sense HAT, the multipurpose add-on board for the Raspberry Pi. Today, we are proud to announce an exciting new upgrade to the emulator. We hope this will make it even easier for you to design amazing experiments with the Sense HAT!

What’s new?

The original release of the emulator didn’t fully support all of the Sense HAT features. Specifically, the movement sensors were not emulated. Thanks to funding from the UK Space Agency, we are delighted to announce that a new round of development has just been completed. From today, the movement sensors are fully supported. The emulator also comes with a shiny new 3D interface, Astro Pi skin mode, and Pygame event handling.

Upgraded sensors

On a physical Sense HAT, real sensors react to changes in environmental conditions like fluctuations in temperature or humidity. The emulator has sliders which are designed to simulate this. However, emulating the movement sensor is a bit more complicated. The upgrade introduces a 3D slider, which is essentially a model of the Sense HAT that you can move with your mouse. Moving the model affects the readings provided by the accelerometer, gyroscope, and magnetometer sensors.

Code written in this emulator is directly portable to a physical Raspberry Pi and Sense HAT without modification. This means you can now develop and test programs using the movement sensors from any internet-connected computer, anywhere in the world.
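
For example, this little program (a sketch using the standard sense_hat module) reads the orientation sensor and turns the LED matrix red whenever the board is tilted more than 20 degrees; paste it into the emulator, then run the very same file on a Pi:

from sense_hat import SenseHat

sense = SenseHat()

while True:
    orientation = sense.get_orientation()  # pitch/roll/yaw in degrees, 0-360
    pitch, roll = orientation["pitch"], orientation["roll"]
    # A level board reads near 0 (or wraps around near 360).
    tilted = 20 < pitch < 340 or 20 < roll < 340
    sense.clear((255, 0, 0) if tilted else (0, 255, 0))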

Astro Pi mode

Astro Pi is our series of competitions offering students the chance to have their code run in space! The code is run on two space-hardened Raspberry Pi units, with attached Sense HATs, on the International Space Station.

Astro Pi skin mode

There are a number of practical things that can catch you out when you are porting your Sense HAT code to an Astro Pi unit, though, such as the orientation of the screen and joystick. Just as having a 3D-printed Astro Pi case enables you to discover and overcome these, so does the Astro Pi skin mode in this emulator. In the bottom right-hand panel, there is an Astro Pi button which enables the mode: click it again to go back to the Sense HAT.

The joystick and push buttons are operated by pressing your keyboard keys: use the cursor keys and Enter for the joystick, and U, D, L, R, A, and B for the buttons.
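
A couple of lines of Python are enough to try the mapping out (again, the same code runs unchanged on real hardware):

from sense_hat import SenseHat

sense = SenseHat()

# Blocks until the joystick is used: Enter / cursor keys in the emulator.
event = sense.stick.wait_for_event()
print(event.direction, event.action)  # e.g. "up pressed"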

Sense HAT resources for Code Clubs

We also have a new range of Code Club resources which are based on the emulator. Of these, three use the environmental sensors and two use the movement sensors. The resources are an ideal way for any Code Club to get into physical computing.

The technology

The 3D models in the emulator are represented entirely with HTML and CSS. “This project pushed the Trinket team, and the 3D web, to its limit,” says Elliott Hauser, CEO of Trinket. “Our first step was to test whether pure 3D HTML/CSS was feasible, using Julian Garnier’s Tridiv.”

The Trinket team’s preliminary 3D model of the Sense HAT

“We added JavaScript rotation logic and the proof of concept worked!” Elliott continues. “Countless iterations, SVG textures, and pixel-pushing tweaks later, the finished emulator is far more than the sum of its parts.”

The finished Sense HAT model: doesn’t it look amazing?

Check out this blog post from Trinket for more on the technology and mathematics behind the models.

One of the compromises we’ve had to make is browser support. Unfortunately, browsers like Firefox and Microsoft Edge don’t fully support this technology yet. Instead, we recommend that you use Chrome, Safari, or Opera to access the emulator.

Where do I start?

If you’re new to the Sense HAT, you can simply copy and paste many of the code examples from our educational resources, like this one. Alternatively, you can check out our Sense HAT Essentials e-book. For a complete list of all the functions you can use, have a look at the Sense HAT API reference here.

The post Sense HAT Emulator Upgrade appeared first on Raspberry Pi.

LastPass Leaking Passwords Via Chrome Extension

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/WF2NBIoXu7o/

LastPass Leaking Passwords is not new; last week its Firefox extension was picked apart – now this week its Chrome extension is giving up its goodies. I’ve always found LastPass a bit suspect; even though they are super easy to use and have a nice UI, they’ve had TOO many serious security issues for a […]

Read the full post at darknet.org.uk

TV Time Machine

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tv-time-machine/

Back when home television sets were thin on the ground and programmes were monochrome, TV maintained a magical aura: a ‘how do they fit the people in that little box’ wonder which has been lost now that sets are common and almost everyone has their own video camera or recording device. Many older shows were filmed specifically to be watched in black and white, and, in much the same way that plugging your SNES into an HD monitor doesn’t quite look right, old classics just don’t look the same when viewed on the modern screen.

1954 brochure advert for Admiral TV sets

’50s televisions were so pretty. So, so pretty.

Wellington Duraes, Senior Program Manager for Microsoft and proud owner of one of the best names I’ve ever seen, has used a Raspberry Pi and some readily available television content to build a TV Time Machine that draws us back to the days of classic, monochrome viewing the best way he can.

He may not be able to utilise the exact technology of the old screen, but he can trick our minds with the set’s retro aesthetics.

TV Time Machine

You can see more information about this project here: https://www.hackster.io/wellington-duraes/tv-time-machine-d11b5f

As explained in his hackster.io project page, Wellington joined his local Maker community, the Snohomish County Makers in Everett, WA, who helped him to build the wooden enclosure for the television. By purchasing turquoise speaker grille fabric online, he was able to give a gorgeous retro feeling to the outer shell.

Wellington: “I can’t really keep it on close to me because I’ll stop working to watch…”

For the innards, Wellington used a cannibalised thrift store Dell monitor, hooking it up to a Raspberry Pi 2 and some second-hand speakers. After the addition of Adafruit’s video looper code to loop free content downloaded from the Internet Archive, plus some 3D-printed channel and volume knobs, the TV Time Machine was complete.

The innards of the TV Time Machine

“Electronics are the easiest part,” explains Wellington. “This is basically a Raspberry Pi 2 playing videos in an infinite loop from a flash drive, a monitor, and a PC speaker.”

On a personal note, my first – and favourite – television was a black-and-white set, the remote long since lost. A hand-me-down from my parents’ bedroom, I remember watching the launch of Euro Disney on its tiny screen, imagining what the fireworks and parade would look like in colour. Of course, I could have just gone downstairs and watched it on the colour television in the living room, but there was something special about having my own screen whose content I could dictate.

For anyone too young to remember the resort’s original name.

On weekend mornings, I would wake and give up my rights to colour content in order to watch Teenage Mutant Ninja Turtles, Defenders of the Earth, and The Wuzzles (my favourite) on that black-and-white screen, knowing that no one would ask for the channel to be changed – what eight-year-old child wanted to watch boring things like the news and weather?

The Wuzzles theme intro

I think that’s why I love this project so much, and why, despite now owning a ridiculously large smart TV with all the bells and whistles of modern technology, I want to build this for the nostalgia kick.

The post TV Time Machine appeared first on Raspberry Pi.

Skillz: editing a web page

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/02/skillz-editing-web-page.html

So one of the skillz you ought to have in cybersec is messing with web-pages client-side using Chrome’s Developer Tools. Web-servers give you a bunch of HTML and JavaScript code which, once it reaches your browser, is yours to change and play with. By changing that code, you can make web-sites do a lot of things their designers never intended.

Let me give you an example. It’s only an example — touching briefly on steps to give you an impression of what’s going on. It’s not a ground-up explanation of everything, which you may find off-putting. Click on the images to expand them so you can see fully what’s going on.

Today is the American holiday called “Presidents Day”. It’s actually not a federal holiday, but a holiday in all 50 states. Originally it was just Washington’s birthday (February 22), but some states choose to honor other presidents as well, hence “Presidents Day”.
Those of us who donated to Donald Trump’s campaign (note: I donated to all candidates’ campaigns back in 2015) received an email today suggesting that to honor Presidents Day, we should “sign a card” for Trump. It’s a gross dis-honoring of the Presidents the day is supposed to commemorate, but whatever, it’s the 21st century.

Okay, let’s say we want to honor the current President with a bunch of 🖕🖕🖕🖕 in order to point out his crassness in exploiting this holiday. We click on the URL [*] and fill it in as such (with multiple skin tones for the middle finger, just so he knows it’s from all of us):

Okay, now we hit the submit button “Add My Name” in order to send this to his campaign. The only problem is, the web page rejects us, telling us “Please enter a valid name” (note: I’m changing font sizes in these screen shots so you can see the message):

This is obviously client-side validation of the field. It’s at this point that we go into Developer Tools in order to turn it off. One way is to [right-click] on that button and, from the popup menu, select “Inspect”, which gets you this screen (yes, the original page is squashed to the left-hand side):

We can edit the HTML right there and add the “novalidate” flag, as shown below, then hit the “Add My Name” button again:

This doesn’t work. The scripts on the webpage aren’t honoring the HTML5 “novalidate” flag. Therefore, we’ll have to go edit those scripts. We do that by clicking on the Sources tab, then pressing [ctrl-shift-f] to open the ‘find’ window in the sources, and typing “Please enter a valid name”. That search finds the JavaScript source file (validation.js) where the validation function is located:
If at this point you find all these windows bewildering, then yes, you are on the right track. We typed in the search there near the bottom next to the classic search icon 🔍. Then right below that we got the search results. We clicked on the search results, then up above popped up the source file (validation.js) among all the possible source files with the line selected that contains our search term. Remember: when you pull down a single HTML page, like the one from donaldtrump.com, it can pull in a zillion JavaScript files as well.
Unlike the HTML, we can’t change the JavaScript on the fly (at least, I don’t know how to). Instead, we have to run more JavaScript. Specifically, we need to run a script that registers a new validation function. If you look in the original source, it contains a function that validates the input by making sure it matches a regular expression:
jQuery.validator.addMethod("isname", function(value, element) {
    return this.optional(element) || (/^[a-zA-Z]+[ ]+(([',. -][a-zA-Z ])?[a-zA-Z]*)+.?$/.test(value.trim()));
}, "Please enter a valid name");
From the console, we are going to call the addMethod function ourselves to register a different validation function for isname, specifically a validation function that always returns true, meaning the input is valid. This will override the previously registered function. As the Founders of our country say, the solution to bad JavaScript is not to censor it, but to add more JavaScript.
jQuery.validator.addMethod("isname", function () {
    return true;
});
We just type that in the Console as shown below (in the bottom window where Search used to be) and hit [enter]. It gives us the response “undefined”, but that’s OK. (Note: in the screenshot I misspelled it as isName, it should instead be all lowercase isname).
Now we can close Developer Tools and press the “Add My Name” button, and we get the following response:
Darn, foiled again. But at least this time, our request went to the server. It was on the server side that the request was rejected. We successfully turned off client-side checking. Had the server accepted our Unicode emoji, we would’ve reached the next step, where it asks for donations. (By the way, the entire purpose of “sign this card” is to get users to donate, nothing else).

Conclusion

So we didn’t actually succeed at doing anything here, but I thought I’d write it up anyway. Editing the web-page client-side, or mucking around with JavaScript client-side, is a skill that every cybersec professional should have. Hopefully, this is an amusing enough example that people will follow the steps to see how this is done.

Dear Obama, From Infosec

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/01/dear-obama-from-infosec.html

Dear President Obama:

We are more than willing to believe Russia was responsible for the hacked emails/records that influenced our election. We believe Russian hackers were involved. Even if these hackers weren’t under the direct command of Putin, we know he could put a stop to such hacking if he chose. It’s like harassment of journalists and diplomats. Putin encourages a culture of thuggery that attacks opposition, without his personal direction, but with his tacit approval.

Your lame attempts to convince us of what we already agree with have irretrievably damaged your message.

Instead of communicating with the American people, you worked through your typical system of propaganda, such as stories in the New York Times quoting unnamed “senior government officials”. We don’t want “unnamed” officials — we want named officials (namely you) who we can pin down and question. When you work through this system of official leaks, we believe you have something to hide, that the evidence won’t stand on its own.

We still don’t believe the CIA’s conclusions because we don’t know, precisely, what those conclusions are. Are they derived purely from companies like FireEye and CrowdStrike based on digital forensics? Or do you have spies in Russian hacker communities that give better information? This is such an important issue that it’s worth degrading sources of information in order to tell us, the American public, the truth.

You had the DHS and US-CERT issue the “GRIZZLY-STEPPE”[*] report “attributing those compromises to Russian malicious cyber activity“. It does nothing of the sort. It’s full of garbage. It contains signatures of viruses that are publicly available, used by hackers around the world, not just Russia. It contains a long list of IP addresses from perfectly normal services, like Tor, Google, Dropbox, Yahoo, and so forth.

Yes, hackers use Yahoo for phishing and malvertising. It doesn’t mean every access of Yahoo is an “Indicator of Compromise”.

For example, I checked my web browser [chrome://net-internals/#dns] and found that last year on November 20th, it accessed two IP addresses that are on the Grizzly-Steppe list:

No, this doesn’t mean I’ve been hacked. It means I just had a normal interaction with Yahoo. It means the Grizzly-Steppe IoCs are garbage.
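
The problem is mechanical and easy to demonstrate. Here's a toy Python sketch of what matching traffic against such a list amounts to; the IoC entries below are made up, but if they were Yahoo's rotating front-end addresses, every ordinary mail check would "match":

import socket

# Hypothetical IoC list entries (made up for illustration).
iocs = {"203.0.113.10", "198.51.100.7"}

# Resolve yahoo.com the way any normal client would...
_, _, addresses = socket.gethostbyname_ex("yahoo.com")

# ...and flag any overlap. Put shared-service IPs on the list, and
# perfectly normal traffic becomes an "Indicator of Compromise".
print("flagged:", iocs & set(addresses))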

If your intent was to show technical information to experts to confirm Russia’s involvement, you’ve done the precise opposite. Grizzly-Steppe proves such enormous incompetence that we doubt all the technical details you might have. I mean, it’s possible that you classified the important details and de-classified the junk, but even then, that junk isn’t worth publishing. There’s no excuse for those Yahoo addresses to be in there, or the numerous other problems.

Among the consequences is that Washington Post story claiming Russians hacked into the Vermont power grid. What really happened is that somebody just checked their Yahoo email, thereby accessing one of the same IP addresses I did. How they get from the facts (one person accessed Yahoo email) to the story (Russians hacked power grid) is your responsibility. This misinformation is your fault.

You announced sanctions for the Russian hacking [*]. At the same time, you announced sanctions for Russian harassment of diplomatic staff. These two events are confused in the press, with most stories reporting you expelled 35 diplomats for hacking, when that appears not to be the case.

Your list of individuals/organizations is confusing. It makes sense to name the GRU, FSB, and their officers. But why name “ZorSecurity” but not sole proprietor “Alisa Esage Shevchenko”? It seems a minor target, and you give no information why it was selected. Conversely, you ignore the APT28/APT29 Dukes/CozyBear groups that feature so prominently in your official leaks. You also throw in a couple extra hackers, for finance hacks rather than election hacks. Again, this causes confusion in the press about exactly who you are sanctioning and why. It seems as slipshod as the DHS/US-CERT report.

Mr President, you’ve got two weeks left in office. Russia’s involvement is a huge issue, especially given President-Elect Trump’s pro-Russia stance. If you’ve got better information than this, I beg you to release it. As it stands now, all you’ve done is support Trump’s narrative, making this look like propaganda — and bad propaganda at that. Give us, the infosec/cybersec community, technical details we can look at, analyze, and confirm.

Regards,
Infosec

[$] GStreamer and the state of Linux desktop security

Post Syndicated from jake original http://lwn.net/Articles/708196/rss

Recently Chris Evans, an IT security expert currently working for Tesla, published a series of blog posts about security vulnerabilities in the GStreamer multimedia framework. A combination of the Chrome browser and GNOME-based desktops creates a particularly scary vulnerability. Evans also made a provocative statement: that vulnerabilities of this severity currently wouldn’t happen in Windows 10. Is the state of security on the Linux desktop really that bad — and what can be done about it?

Subscribers can click below for the full story from this week’s edition.

Amazon AppStream 2.0 – Stream Desktop Apps from AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-appstream-2-0-stream-desktop-apps-from-aws/

My colleague Gene Farrell wrote the guest post below to tell you how the original vision for Amazon AppStream evolved in the face of customer feedback.

Jeff;


At AWS, helping our customers solve problems and serve their customers with technology is our mission. It drives our thinking, and it’s at the center of how we innovate. Our customers use services from AWS to build next-generation mobile apps, create delightful web experiences, and even run their core IT workloads, all at global scale.

While we have seen tremendous innovation and transformation in mobile, web, and core IT, relatively little has changed with desktops and desktop applications. End users don’t yet enjoy freedom in where and how they work; IT is stuck with rigid and expensive systems to manage desktops, applications, and a myriad of devices; and securing company information is harder than ever. In many ways, the cloud seems to have bypassed this aspect of IT.

Our customers want to change that. They want the same benefits of flexibility, scale, security, performance, and cost for desktops and applications as they’re seeing with mobile, web, and core IT. A little over two years ago, we introduced Amazon WorkSpaces, a fully managed, secure cloud desktop service that provides a persistent desktop running on AWS. Today, I am excited to introduce you to Amazon AppStream 2.0, a fully managed, secure application streaming service for delivering your desktop apps to web browsers.

Customers have told us that they have many traditional desktop applications that need to work on multiple platforms. Maintaining these applications is complicated and expensive, and customers are looking for a better solution. With AppStream 2.0, you can provide instant access to desktop applications using a web browser on any device, by streaming them from AWS. You don’t need to rewrite your applications for the cloud, and you only need to maintain a single version. Your applications and data remain secure on AWS, and the application stream is encrypted end to end.

Looking back at the original AppStream
Before I get into more details about AppStream 2.0, it’s worth looking at the history of the original Amazon AppStream service. We launched AppStream in 2013 as an SDK-based service that customers could use to build streaming experiences for their desktop apps, and move these apps to the cloud. We believed that the SDK approach would enable customers to integrate application streaming into their products. We thought game developers and graphics ISVs would embrace this development model, but it turns out it was more work than we anticipated, and required significant engineering investment to get started. Those who did try it found that the feature set did not meet their needs. For example, AppStream only offered a single instance type based on the g2.2xlarge EC2 instance. This limited the service to high-end applications where performance would justify the cost. However, the economics didn’t make sense for a large number of applications.

With AppStream, we set out to solve a significant customer problem, but failed to get the solution right. This is a risk that we are willing to take at Amazon. We want to move quickly, explore areas where we can help customers, but be prepared for failure. When we fail, we learn and iterate fast. In this case, we continued to hear from customers that they needed a better solution for desktop applications, so we went back to the drawing board. The result is AppStream 2.0.

Benefits of AppStream 2.0
AppStream 2.0 addresses many of the concerns we heard from customers who tried the original AppStream service. Here are a few of the benefits:

  • Run desktop applications securely on any device in an HTML5 web browser on Windows and Linux PCs, Macs, and Chromebooks.
  • Instant-on access to desktop applications from wherever users are. There are no delays, no large files to download, and no time-consuming installations. Users get a responsive, fluid experience that is just like running natively installed apps.
  • Simple end user interface so users can run in full screen mode, open multiple applications within a browser tab, and easily switch and interact between them. You can upload files to a session, access and edit them, and download them when you’re done. You can also print, listen to audio, and adjust bandwidth to optimize for your network conditions.
  • Secure applications and data that remain on AWS – only encrypted pixels are streamed to end users. Application streams and user input flow through a secure streaming gateway on AWS over HTTPS, making them firewall friendly. Applications can run inside your own virtual private cloud (VPC), and you can use Amazon VPC security features to control access. AppStream 2.0 supports identity federation, which allows your users to access their applications using their corporate credentials.
  • Fully managed service, so you don’t need to plan, deploy, manage, or upgrade any application streaming infrastructure. AppStream 2.0 manages the AWS resources required to host and run your applications, scales automatically, and provides access to your end users on demand.
  • Consistent, scalable performance on AWS, with access to compute capabilities not typically available on local devices. You can instantly scale locally and globally, and ensure that your users always get a low-latency experience.
  • Multiple streaming instance types to run your applications. You can use instance types from the General Purpose, Compute Optimized, and Memory Optimized instance families to optimize application performance and reduce your overall costs.
  • NICE DCV for high-performance streaming provides secure, high-performance access to applications. NICE DCV delivers a fluid interactive experience, and automatically adjusts to network conditions.

Pricing & availability
With AppStream 2.0, you pay only for the streaming instances that you use, and a small monthly fee per authorized user. The charge for streaming instances depends on the instance type that you select, and the maximum number of concurrent users that will access their applications.

A user fee is charged per unique authorized user accessing applications in a region in any given month.  The user fee covers the Microsoft RDS SAL license, and may be waived if you bring your own RDS CAL licenses via Microsoft’s license mobility program. AppStream 2.0 offers a Free Tier, which provides an admin experience for getting started. The Free Tier includes 40 hours per month, for up to two months. For more information, see this page.

AppStream 2.0 is available today in US East (N. Virginia), US West (Oregon), Europe (Ireland), and AP-Northeast (Tokyo) Regions. You can try the AppStream 2.0 end user experience for free today, with no setup required, by accessing sample applications already installed on AppStream 2.0. To access the Try It Now experience, log in with your AWS account and choose an app to get started.

To learn more about AppStream 2.0, visit the AppStream page.

Gene Farrell, Vice President, AWS Enterprise Applications & EC2 Windows

In the Works – Amazon EC2 Elastic GPUs

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-work-amazon-ec2-elastic-gpus/

I have written about the benefits of GPU-based computing in the past, most recently as part of the launch of the P2 instances with up to 16 GPUs. As I have noted in the past, GPUs offer incredible power and scale, along with the potential to simultaneously decrease your time-to-results and your overall compute costs.

Today I would like to tell you a little bit about a new GPU-based feature that we are working on.  You will soon have the ability to add graphics acceleration to existing EC2 instance types. When you use G2 or P2 instances, the instance size determines the number of GPUs. While this works well for many types of applications, we believe that many other applications are now ready to take advantage of a newer and more flexible model.

Amazon EC2 Elastic GPUs
The upcoming Amazon EC2 Elastic GPUs give you the best of both worlds. You can choose the EC2 instance type and size that works best for your application and then indicate that you want to use an Elastic GPU when you launch the instance, and take your pick of four different sizes:

Name          GPU Memory
eg1.medium    1 GiB
eg1.large     2 GiB
eg1.xlarge    4 GiB
eg1.2xlarge   8 GiB

Today, you have the ability to set up freshly created EBS volumes when you launch new instances. You’ll be able to do something similar with Elastic GPUs, specifying the desired size during the launch process, with the option to stop, modify, and then start a running instance in order to make a change.
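
To make the launch-time model concrete, here is a hedged sketch (in Python with boto3) of what attaching an Elastic GPU during instance launch looks like in the EC2 API that ultimately shipped with this feature. The AMI ID is a placeholder, and the initial release targets Windows AMIs:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Sketch only: launch an instance with an eg1.medium Elastic GPU attached.
    # The AMI ID below is a placeholder for a Windows AMI.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m4.large",
        MinCount=1,
        MaxCount=1,
        ElasticGpuSpecification=[{"Type": "eg1.medium"}],
    )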

Starting with OpenGL
Our Amazon-optimized OpenGL library will automatically detect and make use of Elastic GPUs. We’ll start out with Windows support for OpenGL, and plan to add support for the Amazon Linux AMI and other versions of OpenGL after that. We are also giving consideration to support for other 3D APIs including DirectX and Vulkan (let us know if these would be of interest to you). We will include the Amazon-optimized OpenGL library in upcoming revisions to the existing Microsoft Windows AMI.

OpenGL is great for rendering, but how do you see what’s been rendered? Great question! One option is to use the NICE Desktop Cloud Visualization (acquired earlier this year — Amazon Web Services to Acquire NICE) to stream the rendered content to any HTML5-compatible browser or device. This includes recent versions of Firefox and Chrome, along with all sorts of phones and tablets.

I believe that this unique combination of hardware and software will be a great host for all sorts of 3D visualization and technical computing applications. Two of our customers have already shared some of their feedback with us.

Ray Milhem (VP of Enterprise Solutions & Cloud) at ANSYS told us:

ANSYS Enterprise Cloud delivers a virtual simulation data center, optimized for AWS. It delivers a rich interactive graphics experience critical to supporting the end-to-end engineering simulation processes that allow our customers to deliver innovative product designs. With Elastic GPU, ANSYS will be able to more easily deliver this experience right-sized to the price and performance needs of our customers. We are certifying ANSYS applications to run on Elastic GPU to enable our customers to innovate more efficiently on the cloud.

Bob Haubrock (VP of NX Product Management) at Siemens PLM also had some nice things to say:

Elastic GPU is a game-changer for Computer Aided Design (CAD) in the cloud. With Elastic GPU, our customers can now run Siemens PLM NX on Amazon EC2 with professional-grade graphics, and take advantage of the flexibility, security, and global scale that AWS provides. Siemens PLM is excited to certify NX on the EC2 Elastic GPU platform to help our customers push the boundaries of Design & Engineering innovation.

New Certification Program
To help software vendors and developers make sure that their applications take full advantage of Elastic GPUs and our other GPU-based offerings, we are launching the AWS Graphics Certification Program today. This program offers credits and tools that will help you quickly and automatically test applications across the supported matrix of instance and GPU types.

Stay Tuned
As always, I will share additional information just as soon as it becomes available!

Jeff;

New – Web Access for Amazon WorkSpaces

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-web-access-for-amazon-workspaces/

We launched WorkSpaces in late 2013 (Amazon WorkSpaces – Desktop Computing in the Cloud) and have been adding new features at a rapid clip. Here are some highlights from 2016:

Today we are adding to this list with Amazon WorkSpaces Web Access. You can now access your WorkSpace from recent versions of Chrome or Firefox running on Windows, Mac OS X, or Linux. This lets you stay productive on heavily restricted networks and in situations where installing a WorkSpaces client is not an option. You don’t have to download or install anything, and you can use Web Access from a public computer without leaving any private or cached data behind.

To use Amazon WorkSpaces Web Access, simply visit the registration page using a supported browser and enter the registration code for your WorkSpace:

Then log in with your user name and password:

And here you go (yes, this is IE and Firefox running on WorkSpaces, displayed in Chrome):

This feature is available for all new WorkSpaces and you can access it at no additional charge after your administrator enables it:

Existing WorkSpaces must be rebuilt and custom images must be refreshed in order to take advantage of Web Access.
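
As an aside, later releases of the WorkSpaces API expose the same administrator toggle programmatically. A minimal boto3 sketch, using a placeholder directory ID:

    import boto3

    ws = boto3.client("workspaces")

    # Allow web browser access for all WorkSpaces in a directory.
    # The directory ID is a hypothetical placeholder.
    ws.modify_workspace_access_properties(
        ResourceId="d-0123456789",
        WorkspaceAccessProperties={"DeviceTypeWeb": "ALLOW"},
    )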

Jeff;

 

New – GPU-Powered Amazon Graphics WorkSpaces

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-gpu-powered-amazon-graphics-workspaces/

As you can probably tell from my I Love My Amazon WorkSpace post, I am kind of a fan-boy!

Since writing that post I have found out that I am not alone, and that there are many other WorkSpaces fan-boys and fan-girls out there. Many AWS customers are enjoying their fully managed, secure desktop computing environments almost as much as I am. From their perspective as users, they like to be able to access their WorkSpace from a multitude of supported devices including Windows and Mac computers, PCoIP Zero Clients, Chromebooks, iPads, Fire tablets, and Android tablets. As administrators, they appreciate the ability to deploy high-quality cloud desktops for any number of users. And, finally, as business leaders they like the ability to pay hourly or monthly for the WorkSpaces that they launch.

New Graphics Bundle
These fans already have access to several different hardware choices: the Value, Standard, and Performance bundles. With 1 or 2 vCPUs (virtual CPUs) and 2 to 7.5 GiB of memory, these bundles are a good fit for many office productivity use cases.

Today we are expanding the WorkSpaces family by adding a new GPU-powered Graphics bundle. This bundle offers a high-end virtual desktop that is a great fit for 3D application developers, 3D modelers, and engineers that use CAD, CAM, or CAE tools at the office. Here are the specs:

  • Display – NVIDIA GPU with 1,536 CUDA cores and 4 GiB of graphics memory.
  • Processing – 8 vCPUs.
  • Memory – 15 GiB.
  • System volume – 100 GB.
  • User volume – 100 GB.

This new bundle is available in all regions where WorkSpaces currently operates, and can be used with any of the devices that I mentioned above. You can run the license-included operating system (Windows Server 2008 with Windows 7 Desktop Experience), or you can bring your own licenses for Windows 7 or 10. Applications that make use of OpenGL 4.x, DirectX, CUDA, OpenCL, and the NVIDIA GRID SDK will be able to take advantage of the GPU.

As you start to think about your petabyte-scale data analysis and visualization, keep in mind that these instances are located just light-feet away from EC2, RDS, Amazon Redshift, S3, and Kinesis. You can do your compute-intensive analysis server-side, and then render it in a visually compelling way on an adjacent WorkSpace. I am highly confident that you can use this combination of AWS services to create compelling applications that would simply not be cost-effective or achievable in any other way.

There is one important difference between the Graphics Bundle and the other bundles. Due to the way that the underlying hardware operates, WorkSpaces that run this bundle do not save the local state (running applications and open documents) when used in conjunction with the AutoStop running mode that I described in my Amazon WorkSpaces Update – Hourly Usage and Expanded Root Volume post. We recommend saving open documents and closing applications before disconnecting from your WorkSpace or stepping away from it for an extended period of time.

Demo
I don’t build 3D applications or use CAD, CAM, or CAE tools. However, I do like to design and build cool things with LEGO® bricks! I fired up the latest version of LEGO Digital Designer (LDD) and spent some time enhancing a design. Although I was not equipped to do any benchmarks, the GPU-enhanced version definitely ran more quickly and produced a higher quality finished product. Here’s a little design study I’ve been working on:

With my design all set up it was time to start building. Instead of trying to re-position my monitor so that it would be visible from my building table, I simply logged in to my Graphics WorkSpace from my Fire tablet. I was able to scale and rotate my design very quickly, even though I had very modest local computing power. Here’s what I saw on my Fire:

As you can see, the two screens (desktop and Fire) look identical! I stepped over to my building table and was able to set things up so that I could see my design and find my bricks:

Pricing
Graphics WorkSpaces are available with an hourly billing option. You pay a small, fixed monthly fee to cover infrastructure costs and storage, and an hourly rate for each hour that the WorkSpace is used during the month. Prices start at $22/month + $1.75 per hour in the US East (Northern Virginia) Region; see the WorkSpaces Pricing page for more information.
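
To put those rates in perspective, a Graphics WorkSpace used for 40 hours in a month would cost $22 + (40 × $1.75) = $92 in that region.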

Jeff;

 

Pirate Bay Risks “Repeat Offender” Ban From Google

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-bay-risks-repeat-offender-ban-google-161111/

Google regularly checks websites for malicious and harmful content to help people avoid running into dangerous situations.

This safe browsing service is used by modern browsers such as Chrome, Firefox, and Safari, which throw up a warning before people attempt to visit risky sites.

Frequent users of The Pirate Bay are familiar with these ominous warning signs. The site has been flagged several times over the past few years and twice in recent weeks.

This issue is more common on pirate sites as these only have access to lower-tier advertising agencies, some of which have minimal screening procedures for ads.

Thus far the browser roadblocks have always disappeared after the rogue advertisements have gone away, but according to Google, the red flag can become more permanent in the future.

The company has announced that it has implemented a “repeat offender” policy to address sites that frequently run into these problems. This is to prevent sites from circumventing the security measures by turning malicious content off and on.

“Over time, we’ve observed that a small number of websites will cease harming users for long enough to have the warnings removed, and will then revert to harmful activity,” Google’s Safe Browsing Team writes.

“Safe Browsing will begin to classify these types of sites as ‘Repeat Offenders’,” the announcement adds.

Chrome’s Pirate Bay block


The new policy will only affect sites that link to harmful content. So-called ‘hacked’ sites, which Google also warns about, are not part of these measures.

Under these new rules, The Pirate Bay is also at risk of being benched for 30 days if it’s caught more than once in a short period of time. The same applies to all other sites on the Internet of course.

TorrentFreak asked Google what the timeframe is for sites to get a repeat offender classification, but the company hasn’t yet replied.

The Pirate Bay team isn’t really concerned about the new policy. They stress that in their case, the issue lies with third-party advertisers which they have no control over.

“Tell Google to get an ad blocker?” TPB’s Spud17 notes.

“Seriously though, there aren’t a lot of ad agencies willing to work with sharing sites. The ones we have access to aren’t very concerned with what they put up, and don’t exactly give us a preview of what their clients send them before they air it.”

The TPB team doesn’t see their site as a repeat offender. However, for the ad agencies there’s a lot at stake, so perhaps this measure will trigger them to be more vigilant.

“It’s infrequent enough, I don’t believe TPB will be flagged as a Repeat Offender. Ultimately, that will cost the ad agencies dearly if all their clients were permanently denied visitors.

“So maybe in the long run those agencies with a tendency to serve malicious ads will better screen their clients,” Spud17 adds.

Even if The Pirate Bay or other pirate sites get banned for thirty days, it’s not the end of the world. People can easily disable the malware checking option in their browser to regain direct access. That is, if they are willing to take the risk.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

In Case You Missed These: AWS Security Blog Posts from September and October

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-september-and-october/

In case you missed any AWS Security Blog posts from September and October, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from enabling multi-factor authentication on your AWS API calls to using Amazon CloudWatch Events to monitor application health.

October

October 30: Register for and Attend This November 10 Webinar—Introduction to Three AWS Security Services
As part of the AWS Webinar Series, AWS will present Introduction to Three AWS Security Services on Thursday, November 10. This webinar will start at 10:30 A.M. and end at 11:30 A.M. Pacific Time. AWS Solutions Architect Pierre Liddle shows how AWS Identity and Access Management (IAM), AWS Config Rules, and AWS CloudTrail can help you maintain control of your environment. In a live demo, Pierre shows you how to track changes, monitor compliance, and keep an audit record of API requests.

October 26: How to Enable MFA Protection on Your AWS API Calls
Multi-factor authentication (MFA) provides an additional layer of security for sensitive API calls, such as terminating Amazon EC2 instances or deleting important objects stored in an Amazon S3 bucket. In some cases, you may want to require users to authenticate with an MFA code before performing specific API requests, and by using AWS Identity and Access Management (IAM) policies, you can specify which API actions a user is allowed to access. In this blog post, I show how to enable an MFA device for an IAM user and author IAM policies that require MFA to perform certain API actions such as EC2’s TerminateInstances.
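
As a rough illustration of the technique (not necessarily the exact policy from the post), the standard aws:MultiFactorAuthPresent condition key is what gates an action on MFA. Here is a minimal boto3 sketch that creates such a policy; the policy name is hypothetical:

    import json
    import boto3

    iam = boto3.client("iam")

    # Allow ec2:TerminateInstances only when the caller authenticated with MFA.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }],
    }

    # "RequireMfaForTerminate" is a hypothetical name for this example.
    iam.create_policy(
        PolicyName="RequireMfaForTerminate",
        PolicyDocument=json.dumps(policy_document),
    )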

October 19: Reserved Seating Now Open for AWS re:Invent 2016 Sessions
Reserved seating is new to re:Invent this year and is now open! Some important things you should know about reserved seating:

  1. All sessions have a predetermined number of seats available and must be reserved ahead of time.
  2. If a session is full, you can join a waitlist.
  3. Waitlisted attendees will receive a seat in the order in which they were added to the waitlist and will be notified via email if and when a seat is reserved.
  4. Only one session can be reserved for any given time slot (in other words, you cannot double-book a time slot on your re:Invent calendar).
  5. Don’t be late! The minute the session begins, if you have not badged in, attendees waiting in line at the door might receive your seat.
  6. Waitlisting will not be supported onsite and will be turned off 7-14 days before the beginning of the conference.

October 17: How to Help Achieve Mobile App Transport Security (ATS) Compliance by Using Amazon CloudFront and AWS Certificate Manager
Web and application users and organizations have expressed a growing desire to conduct most of their HTTP communication securely by using HTTPS. At its 2016 Worldwide Developers Conference, Apple announced that starting in January 2017, apps submitted to its App Store will be required to support App Transport Security (ATS). ATS requires all connections to web services to use HTTPS and TLS version 1.2. In addition, Google has announced that starting in January 2017, new versions of its Chrome web browser will mark HTTP websites as being “not secure.” In this post, I show how you can generate Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificates by using AWS Certificate Manager (ACM), apply the certificates to your Amazon CloudFront distributions, and deliver your websites and APIs over HTTPS.
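
The certificate request itself is a single API call. A minimal boto3 sketch, with placeholder domain names (note that certificates used with CloudFront must be requested in the US East (N. Virginia) region):

    import boto3

    # Certificates for CloudFront must live in us-east-1.
    acm = boto3.client("acm", region_name="us-east-1")

    # The domain names are placeholders; ACM validates that you control
    # the domain before issuing the certificate.
    response = acm.request_certificate(
        DomainName="www.example.com",
        SubjectAlternativeNames=["example.com"],
    )
    print(response["CertificateArn"])

Once the certificate is issued, its ARN goes into the ViewerCertificate settings of your CloudFront distribution.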

October 5: Meet AWS Security Team Members at Grace Hopper 2016
For those of you joining this year’s Grace Hopper Celebration of Women in Computing in Houston, you may already know the conference will have a number of security-specific sessions. A group of women from AWS Security will be at the conference, and we would love to meet you to talk about your cloud security and compliance questions. Are you a student, an IT security veteran, or an experienced techie looking to move into security? Make sure to find us to talk about career opportunities.

September

September 29: How to Create a Custom AMI with Encrypted Amazon EBS Snapshots and Share It with Other Accounts and Regions
An Amazon Machine Image (AMI) provides the information required to launch an instance (a virtual server) in your AWS environment. You can launch an instance from a public AMI, customize the instance to meet your security and business needs, and save configurations as a custom AMI. With the recent release of the ability to copy encrypted Amazon Elastic Block Store (Amazon EBS) snapshots between accounts, you now can create AMIs with encrypted snapshots by using AWS Key Management Service (KMS) and make your AMIs available to users across accounts and regions. This allows you to create your AMIs with required hardening and configurations, launch consistent instances globally based on the custom AMI, and increase performance and availability by distributing your workload while meeting your security and compliance requirements to protect your data.
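
For a sense of the underlying snapshot operations, here is a hedged boto3 sketch; the snapshot ID, KMS key ARN, and account ID are placeholders, and the target account also needs permission to use the KMS key through its key policy:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Copy a snapshot, encrypting the copy under a customer-managed KMS key.
    copy = ec2.copy_snapshot(
        SourceRegion="us-east-1",
        SourceSnapshotId="snap-0123456789abcdef0",
        Encrypted=True,
        KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/placeholder-key-id",
        Description="Encrypted copy for a shared custom AMI",
    )

    # Share the encrypted snapshot with another AWS account.
    ec2.modify_snapshot_attribute(
        SnapshotId=copy["SnapshotId"],
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=["444455556666"],
    )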

September 19: 32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog
AWS re:Invent 2016 begins November 28, and the live session catalog now includes 32 security and compliance sessions. 19 of these sessions are in the Security & Compliance track and 13 are in the re:Source Mini Con for Security Services. All 32 session titles and abstracts are included below.

September 8: Automated Reasoning and Amazon s2n
In June 2015, AWS Chief Information Security Officer Stephen Schmidt introduced AWS’s new Open Source implementation of the SSL/TLS network encryption protocols, Amazon s2n. s2n is a library that has been designed to be small and fast, with the goal of providing you with network encryption that is more easily understood and fully auditable. In the 14 months since that announcement, development on s2n has continued, and we have merged more than 100 pull requests from 15 contributors on GitHub. Those active contributors include members of the Amazon S3, Amazon CloudFront, Elastic Load Balancing, AWS Cryptography Engineering, Kernel and OS, and Automated Reasoning teams, as well as 8 external, non-Amazon Open Source contributors.

September 6: IAM Service Last Accessed Data Now Available for the Asia Pacific (Mumbai) Region
In December, AWS Identity and Access Management (IAM) released service last accessed data, which helps you identify overly permissive policies attached to an IAM entity (a user, group, or role). Today, we have extended service last accessed data to support the recently launched Asia Pacific (Mumbai) Region. With this release, you can now view the date when an IAM entity last accessed an AWS service in this region. You can use this information to identify unnecessary permissions and update policies to remove access to unused services.
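
The feature described in that post surfaces in the IAM console; for completeness, later SDK releases expose the same data programmatically, roughly as in this boto3 sketch (the role ARN is a placeholder):

    import time
    import boto3

    iam = boto3.client("iam")

    # Request a report for one IAM entity; the ARN is a placeholder.
    job = iam.generate_service_last_accessed_details(
        Arn="arn:aws:iam::111122223333:role/example-role"
    )

    # The report is built asynchronously, so poll until it is ready.
    while True:
        details = iam.get_service_last_accessed_details(JobId=job["JobId"])
        if details["JobStatus"] != "IN_PROGRESS":
            break
        time.sleep(2)

    for svc in details["ServicesLastAccessed"]:
        print(svc["ServiceName"], svc.get("LastAuthenticated", "never accessed"))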

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig