
Don Jr.: I’ll bite

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/11/don-jr-ill-bite.html

So Don Jr. tweets the following, which is an excellent troll. So I thought I’d bite. The reason is that I just got through debunking Democrat claims about NetNeutrality, so it seems like a good time to balance things out and debunk Trump nonsense.

The issue here is not which side is right. The issue here is whether you stand for truth, or whether you’ll seize any factoid that appears to support your side, regardless of the truthfulness of it. The ACLU obviously chose falsehoods, as I documented. In the following tweet, Don Jr. does the same.

It’s a preview of the hyperpartisan debates you are likely to have across the dinner table tomorrow, with each side trying to outdo the other in the falsehoods they’ll claim.

What we see in these numbers is a steady trend since the Great Recession, with no evidence in the graphs that Trump has influenced them one way or the other.

Stock markets at all time highs

This is true, but it’s obviously not due to Trump. The stock markets have been steadily rising since the Great Recession. Trump has done nothing substantive to change the market’s trajectory, nor has he inspired it to change direction.
To be fair to Don Jr., we’ve all been crediting (or blaming) presidents for changes in the stock market despite the fact they have almost no influence over it. Presidents don’t run the economy; it’s an inappropriate conceit. The most influence they’ve had is in harming it.

Lowest jobless claims since 73

Again, let’s graph this:

As we can see, jobless claims have been on a smooth downward trajectory since the Great Recession. It’s difficult to see here how President Trump has influenced these numbers.
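
You can reproduce these graphs yourself. Here’s a minimal Python sketch (assuming the pandas-datareader and matplotlib packages are installed) that pulls the weekly initial claims series, ICSA, from FRED and plots it from 2007 onward:

    # Plot weekly initial jobless claims from FRED (series ICSA).
    # Assumes: pip install pandas-datareader matplotlib
    import matplotlib.pyplot as plt
    from pandas_datareader import data as pdr

    claims = pdr.DataReader("ICSA", "fred", start="2007-01-01")
    claims.plot(title="Initial jobless claims, weekly (FRED: ICSA)", legend=False)
    plt.ylabel("claims")
    plt.show()

The downward slope runs smoothly across both administrations; there’s no visible kink around January 2017.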

6 Trillion added to the economy

What he’s referring to is that assets have risen in value, like the stock market, homes, gold, and even Bitcoin.
But this is the well-known fallacy of Mercantilism: believing the “economy” is measured by the value of its assets. This was debunked by Adam Smith in his book “The Wealth of Nations”, where he showed instead that the “economy” is measured by how much it produces (GDP – Gross Domestic Product), not by its assets.
GDP has grown at 3.0%, which is pretty good compared to the long term trend, and is better than Europe or Japan (though not as good as China). But Trump doesn’t deserve any credit for this — today’s rise in GDP is the result of stuff that happened years ago.
Assets have risen by $6 trillion, but that’s not a good thing. After all, when you sell your home for more money, the buyer has to pay more. One person is better off and one is worse off, so the net effect is zero.
Actually, such an asset price increase is a worrisome indicator — we are entering into bubble territory. It’s the result of loose monetary policy, the low interest rates and “quantitative easing” designed under the Obama administration to stimulate the economy. That’s why all assets are rising in value. Normally, a rise in one asset means a fall in another, like selling gold to pay for houses. But because of loose monetary policy, all assets are increasing in price. The amazing rise in Bitcoin over the last year is as much a result of this bubble growing in all assets as it is of an exuberant belief in Bitcoin.
When this bubble collapses, which may happen during Trump’s term, it’ll really be the Obama administration who is to blame. I mean, if Trump is willing to take credit for the asset price bubble now, I’m willing to give it to him, as long as he accepts the blame when it crashes.

1.5 million fewer people on food stamps

As you’d expect, I’m going to debunk this with a graph: the numbers have been falling since the Great Recession. Indeed, in the previous period under Obama, 1.9 million people got off food stamps, so Trump’s performance is slightly behind Obama’s, not ahead of it. Of course, neither president is really responsible.

Consumer confidence through the roof

Again we are going to graph this number:

Again we find nothing in the graph that suggests President Trump is responsible for any change — it’s been improving steadily since the Great Recession.

One thing to note is that, technically, it’s not “through the roof” — it’s still quite a bit below the roof set during the dot-com era.

Lowest Unemployment rate in 17 years

Again, let’s simply graph it over time and look for Trump’s contribution. As we can see, there doesn’t appear to be anything special Trump has done — unemployment has been steadily improving since the Great Recession.
But here’s the thing: the “unemployment rate” only measures those looking for work, not those who have given up. The number that concerns people more is the “labor force participation rate”. The Great Recession kicked a lot of workers out of the economy.
Mostly this is because Baby Boomers are now retiring and leaving the workforce, and some have chosen to retire early rather than look for another job. But there are still other problems in our economy that cause this. President Trump has done nothing in particular to solve these problems.

Conclusion

As we see, Don Jr’s tweet is a troll. When we look at the graphs of these indicators going back to the Great Recession, we don’t see how President Trump has influenced anything. The improvements this year are in line with the improvements last year, which are in turn in line with the improvements in the previous year.
To be fair, all parties credit their President with improvements during their term. President Obama’s supporters did the same thing. But at least right now, with these numbers, we can see that there’s no merit to anything in Don Jr’s tweet.
The hyperpartisan rancor in this country exists because neither side cares about the facts. We should care. We should care that these claims suck, even if we are Republicans. Conversely, we should care that those NetNeutrality claims by Democrats suck, even if we are Democrats.

Cracking Group 3DM Loses Piracy Case Against Game Maker

Post Syndicated from Ernesto original https://torrentfreak.com/cracking-group-3dm-loses-piracy-case-against-game-maker-171115/

While most cracking groups operate under a veil of secrecy, China-based 3DM is not shy to come out in public.

The group’s leader, known as Bird Sister, has commented on various gaming and piracy related issues in the past.

She also spoke out when her own group was sued by the Japanese game manufacturer Koei Tecmo last year. The company accused 3DM of pirating several of its titles, including Romance of the Three Kingdoms.

However, Bird Sister instead wondered why the company should be able to profit from a work inspired by a 3rd-century novel from China.

“…why does a Japanese company, Koei have the copyright of this game when the game is obviously a derivation from the book “Romance of the Three Kingdoms” written by Chen Shou. I think Chinese gaming companies should try taking back the copyright,” she said.

Bird Sister


The novel in question has long since been in the public domain so there’s nothing stopping Koei Tecmo from using it, as Kotaku points out. The game, however, is a copyrighted work and 3DM’s actions were seen as clear copyright infringement by a Chinese court.

In a press release, Koei Tecmo announces that it has won its lawsuit against the cracking group.

The court ordered 3DM to stop distributing the infringing games and awarded a total of 1.62 million Yuan ($245,000) in piracy damages and legal fees.

While computer games are cracked and pirated on a daily basis, those responsible are rarely held accountable. This makes the case against 3DM rather unique. And if it’s up to the game manufacturer, it may not be the last.

“We will continue to respond rigorously to infringements of our copyrights and trademark rights, both in domestic and overseas markets, while also developing satisfying games that many users can enjoy,” said the company, commenting on the ruling.

While the lawsuit may help to steer the cracking group away from pirating Koei Tecmo games, it can’t undo any earlier releases. Court order or not, past 3DM releases, including Romance of the Three Kingdoms titles, are still widely available through third-party sites.


Say Hello To Our Newest AWS Community Heroes (Fall 2017 Edition)

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/say-hello-to-our-newest-aws-community-heroes-fall-2017-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these heroes share their time and knowledge across social media and through in-person events. Heroes also actively help drive community-led tracks at conferences. At this year’s re:Invent, many Heroes will be speaking during the Monday Community Day track.

This November, we are thrilled to have four Heroes joining our network of cloud innovators. Without further ado, meet our newest AWS Community Heroes!

 

Anh Ho Viet

Anh Ho Viet is the founder of AWS Vietnam User Group, Co-founder & CEO of OSAM, an AWS Consulting Partner in Vietnam, an AWS Certified Solutions Architect, and a cloud lover.

At OSAM, Anh and his enthusiastic team have helped many companies, from SMBs to enterprises, move to the cloud with AWS. They offer a wide range of services, including migration, consultation, architecture, and solution design on AWS. Anh’s vision for OSAM goes beyond being a cloud service provider: the company will take part in building a complete AWS ecosystem in Vietnam, where other companies are encouraged to become AWS partners through training and collaboration activities.

In 2016, Anh founded the AWS Vietnam User Group as a channel to share knowledge and hands-on experience among cloud practitioners. Since then, the community has reached more than 4,800 members and is still expanding. The group holds monthly meetups, connects many SMEs to AWS experts, and provides real-time, free-of-charge consultancy to startups. In August 2017, Anh became lead content creator for a program called “Cloud Computing Lectures for Universities”, which includes translating AWS documentation & news into Vietnamese, providing students with fundamental, up-to-date knowledge of AWS cloud computing, and supporting students’ career paths.

 

Thorsten Höger

Thorsten Höger is CEO and cloud consultant at Taimos, where he advises customers on how to use AWS. As a developer, he focuses on improving development processes and automating everything to build efficient deployment pipelines for customers of all sizes.

Before becoming self-employed, Thorsten worked as a developer and CTO of Germany’s first private bank running on AWS. With his colleagues, he migrated the core banking system to the AWS platform in 2013. Since then, he has organized the AWS user group in Stuttgart and is a frequent speaker at meetups, BarCamps, and other community events.

As a supporter of open source software, Thorsten maintains or contributes to several projects on GitHub, such as test frameworks for AWS Lambda and Amazon Alexa, and developer tools for CloudFormation. He is also the maintainer of the Jenkins AWS Pipeline plugin.

In his spare time, he enjoys indoor climbing and cooking.

 

Becky Zhang

Yu Zhang (Becky Zhang) is COO of BootDev, which focuses on Big Data solutions on AWS and high-concurrency web architecture. Before helping run BootDev, she worked at Yubis IT Solutions as an operations manager.

Becky plays a key role in the AWS User Group Shanghai (AWSUGSH), regularly organizing AWS UG events, including AWS Tech Meetups and happy hours, and gathering AWS talent together to discuss the latest technology and AWS services. As a woman in the technology industry, Becky is keen on promoting Women in Tech and encourages more women to get involved in the community.

Becky also connects the China AWS User Group with user groups in other regions, including Korea, Japan, and Thailand. She was invited as a panelist at AWS re:Invent 2016 and spoke at the Seoul AWS Summit this April to introduce AWS User Group Shanghai and communicate with other AWS User Groups around the world.

Besides events, Becky also promotes the Shanghai AWS User Group by posting AWS-related tech articles, event forecasts, and event reports to Weibo, Twitter, Meetup.com, and WeChat (which now has over 2000 official account followers).

 

Nilesh Vaghela

Nilesh Vaghela is the founder of ElectroMech Corporation, an AWS Cloud and open source focused company (the company started with an open source motto). Nilesh has been very active in the Linux community since 1998. He started working with AWS Cloud technologies in 2013, and in 2014 he trained a dedicated cloud team and began providing full support for AWS cloud services as an AWS Standard Consulting Partner. He always works to establish and encourage cloud and open source communities.

He started the AWS Meetup community in Ahmedabad in 2014 and as of now 12 Meetups have been conducted, focusing on various AWS technologies. The Meetup has quickly grown to include over 2000 members. Nilesh also created a Facebook group for AWS enthusiasts in Ahmedabad, with over 1500 members.

Apart from the AWS Meetup, Nilesh has delivered a number of seminars, workshops, and talks around AWS introduction and awareness, at various organizations, as well as at colleges and universities. He has also been active in working with startups, presenting AWS services overviews and discussing how startups can benefit the most from using AWS services.

Nilesh is also a trainer in Red Hat Linux technologies and AWS Cloud technologies.

 

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.

98, 99, 100 CloudFront Points of Presence!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/98-99-100-cloudfront-points-of-presence/

Nine years ago I showed you how you could Distribute Your Content with Amazon CloudFront. We launched CloudFront in 2008 with 14 Points of Presence and have been expanding rapidly ever since. Today I am pleased to announce the opening of our 100th Point of Presence, the fifth one in Tokyo and the sixth in Japan. With 89 Edge Locations and 11 Regional Edge Caches, CloudFront now supports traffic generated by millions of viewers around the world.

23 Countries, 50 Cities, and Growing
Those 100 Points of Presence span the globe, with sites in 50 cities and 23 countries. In the past 12 months we have expanded the size of our network by about 58%, adding 37 Points of Presence, including nine in the following new cities:

  • Berlin, Germany
  • Minneapolis, Minnesota, USA
  • Prague, Czech Republic
  • Boston, Massachusetts, USA
  • Munich, Germany
  • Vienna, Austria
  • Kuala Lumpur, Malaysia
  • Philadelphia, Pennsylvania, USA
  • Zurich, Switzerland

We have even more in the works, including an Edge Location in the United Arab Emirates, currently planned for the first quarter of 2018.

Innovating for Our Customers
As I mentioned earlier, our network consists of a mix of Edge Locations and Regional Edge Caches. First announced at re:Invent 2016, the Regional Edge Caches sit between our Edge Locations and your origin servers, have even more memory than the Edge Locations, and allow us to store content close to the viewers for rapid delivery, all while reducing the load on the origin servers.

While locations are important, they are just a starting point. We continue to focus on security with the recent launch of our Security Policies feature and our announcement that CloudFront is a HIPAA-eligible service. We gave you more content-serving and content-generation options with the launch of Lambda@Edge, letting you run AWS Lambda functions close to your users.

We have also been working to accelerate the processing of cache invalidations and configuration changes. We now accept invalidations within milliseconds of the request and confirm that the request has been processed world-wide, typically within 60 seconds. This helps to ensure that your customers have access to fresh, timely content!
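
If you would like to try this out, here’s a small boto3 sketch (the distribution ID is a placeholder) that submits an invalidation for all paths and then polls until CloudFront reports it complete:

    # Submit a CloudFront invalidation and wait for it to finish.
    # DISTRIBUTION_ID is a placeholder; substitute your own.
    import time
    import boto3

    cloudfront = boto3.client("cloudfront")
    DISTRIBUTION_ID = "E1EXAMPLE12345"

    resp = cloudfront.create_invalidation(
        DistributionId=DISTRIBUTION_ID,
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/*"]},
            # CallerReference must be unique per request.
            "CallerReference": str(time.time()),
        },
    )
    inv_id = resp["Invalidation"]["Id"]

    # Poll until the status flips from InProgress to Completed.
    while True:
        status = cloudfront.get_invalidation(
            DistributionId=DISTRIBUTION_ID, Id=inv_id
        )["Invalidation"]["Status"]
        if status == "Completed":
            break
        time.sleep(5)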

Visit our Getting Started with Amazon CloudFront page for sign-up information, tutorials, webinars, on-demand videos, office hours, and more.

Jeff;

 

Register for AWS re:Invent 2017 Live Streams

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/register-for-aws-reinvent-2017-live-streams/


If you cannot attend AWS re:Invent 2017 in person, you can still watch the two keynotes and Tuesday Night Live from wherever you are. We will live stream both keynotes with Andy Jassy, CEO of Amazon Web Services, and Werner Vogels, CTO of Amazon.com, as well as Tuesday Night Live with Peter DeSantis, VP of AWS Global Infrastructure. Note that the live streams will be in English only. The recordings will include captions for Japanese, Korean, and Simplified Chinese.

Register today for the AWS re:Invent 2017 live streams!

– Craig

Dialekt-o-maten vending machine

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/dialekt-o-maten-vending-machine/

At some point, many of you will have become exasperated with your AI personal assistant for not understanding you due to your accent – or worse, your fantastic regional dialect! A vending machine from Coca-Cola Sweden turns this issue inside out: the Dialekt-o-maten rewards users with a free soft drink for speaking in a Swedish regional dialect.

The world’s first vending machine where you pay with a dialect!

Thirsty fans along with journalists were invited to try the Dialekt-o-maten at Stureplan in central Stockholm. Depending on how well they could pronounce phrases in assorted Swedish dialects, they were rewarded with an ice-cold Coke with that destination on the label.

The Dialekt-o-maten

The machine, which uses a Raspberry Pi, was set up in Stureplan Square in Stockholm. A person presses one of six buttons to choose the regional dialect they want to try out. They then hit ‘record’, and speak into the microphone. The recording is compared to a library of dialect samples, and, if it matches closely enough, voila! — the Dialekt-o-maten dispenses a soft drink for free.

Dialekt-o-maten on the highstreet in Stockholm

Code for the Dialekt-o-maten

The team of developers used the dejavu Python library, as well as custom-written code that responds to new recordings. Carl-Anders Svedberg, one of the developers, said:

Testing the voices and fine-tuning the right level of difficulty for the users was quite tricky. And we really should have had more voice samples. Filtering out noise from the surroundings, like cars and music, was also a small hurdle.
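
The team’s exact code isn’t reproduced in this post, but with the dejavu library the matching flow looks roughly like this sketch; the database settings, file paths, and confidence threshold here are all assumptions:

    # Fingerprint a folder of dialect samples, then match a new recording.
    # The MySQL settings and paths below are placeholders.
    from dejavu import Dejavu
    from dejavu.recognize import FileRecognizer

    config = {
        "database": {
            "host": "127.0.0.1",
            "user": "root",
            "passwd": "secret",
            "db": "dialekt",
        }
    }

    djv = Dejavu(config)

    # Build the fingerprint library from reference recordings.
    djv.fingerprint_directory("samples/vadstena", [".wav"])

    # Compare a fresh recording against the library.
    match = djv.recognize(FileRecognizer, "recordings/attempt.wav")
    if match and match["confidence"] > 50:   # threshold is a guess
        print("Close enough - dispense a Coke:", match["song_name"])
    else:
        print("Try again!")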

While they wrote the initial software on macOS, the team transferred it to a Raspberry Pi so they could install the hardware inside the Dialekt-o-maten.

Regional dialects

Even though Sweden has only ten million inhabitants, there are more than 100 Swedish dialects. In some areas of Sweden, the local language even still resembles Old Norse. The Dialekt-o-maten recorded how well people spoke the six dialects it used. Apparently, the hardest one to imitate is spoken in Vadstena, and the easiest is spoken in Smögen.

Dialekt-o-maten on Stockholm highstreet

Speech recognition with the Pi

Because of its audio input capabilities, the Raspberry Pi is very useful for building devices that use speech recognition software. One of our favourite projects in this vein is of course Allen Pan’s Real-Life Wizard Duel. We also think this pronunciation training machine by Japanese makers HomeMadeGarbage is really neat. Ideas from these projects and the Dialekt-o-maten could potentially be combined to make a fully fledged language-learning tool!

How about you? Have you used a Raspberry Pi to help you become multilingual? If so, do share your project with us in the comments or via social media.

The post Dialekt-o-maten vending machine appeared first on Raspberry Pi.

Piracy ‘Disaster’ Strikes The Hitman’s Bodyguard

Post Syndicated from Ernesto original https://torrentfreak.com/piracy-disaster-strikes-the-hitmans-bodyguard-170829/

The Hitman’s Bodyguard is an action comedy movie featuring Hollywood stars Samuel L. Jackson and Ryan Reynolds.

While this hasn’t been a great summer at the box office, the makers of the film can’t complain, as they’ve taken the top spot two weeks in a row. That’s reason for a small celebration, but the fun didn’t last long.

A few days ago, several high-quality copies of the film started to appear on various pirate sites. While movie leaks happen every day, it’s very unusual for one to happen just a few days after the theatrical release. In several countries, including Australia, China, and Germany, the movie hasn’t even premiered yet.

Many pirates appear to be genuinely surprised by the early release as well, based on various comments. “August 18 was the premiere, how did you do this magic?” one downloader writes.

“OK, this was nothing short of perfection. 8 days post theatrical release… perfect 1080p clarity… no hardcoded subs… English translation AND full English subs… 5.1 audio. Does it get any better?” another commenter added.

The pirated copies of the movie are tagged as a “Web-DL” which means that they were ripped from an online streaming service. While the source is not revealed anywhere, the movie is currently available on Netflix in Japan, which makes it a likely candidate.

Screenshot of the leak

While the public often calls for simultaneous theatrical and Internet releases, the current leak shows that this might come with a significant risk.

It’s clear that The Hitman’s Bodyguard production company Millennium Films is going to be outraged. The company has taken an aggressive stance against piracy in recent years. Among other things, it demanded automated cash settlements from alleged BitTorrent pirates and is also linked to various ‘copyright troll’ lawsuits.

Whether downloaders of The Hitman’s Bodyguard will be pursued as well has yet to be seen. For now, there is still plenty of interest from pirates. The movie was the most downloaded title on BitTorrent last week and is still doing well.


The Pronunciation Training Machine

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pronunciation-training-machine/

Using a Raspberry Pi, an Arduino, an Adafruit NeoPixel Ring and a servomotor, Japanese makers HomeMadeGarbage produced this Pronunciation Training Machine to help their parents distinguish ‘L’s and ‘R’s when speaking English.

“L R pronunciation correction cast, Mom edition” – Pronunciation Training Machine, posted by Home Made Garbage (@homemadegarbage) on Instagram. #right #light #raspberrypi #arduino #neopixel

How does the Pronunciation Training Machine work?

As you can see in the video above, the machine utilises the Google Cloud Speech API to recognise their parents’ pronunciation of the words ‘right’ and ‘light’. Correctly pronounce the former, and the servo-mounted arrow points to the right. Pronounce the latter, and the NeoPixel Ring illuminates because, well, you just said “light”.
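
The full code is on the project’s Hackster page (linked below). The recognition step with the Google Cloud Speech API looks something like this sketch, where the servo and NeoPixel helpers are stand-ins rather than the makers’ actual code:

    # Recognise "right" vs "light" in a clip with Google Cloud Speech.
    # Assumes GOOGLE_APPLICATION_CREDENTIALS is set and clip.wav is 16 kHz mono.
    from google.cloud import speech

    def point_arrow_right():
        print("(servo) arrow -> right")    # stand-in for servo control

    def light_neopixel_ring():
        print("(NeoPixel) ring on")        # stand-in for LED control

    client = speech.SpeechClient()

    with open("clip.wav", "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )

    response = client.recognize(config=config, audio=audio)

    if response.results:
        transcript = response.results[0].alternatives[0].transcript.lower()
        if "right" in transcript:
            point_arrow_right()
        elif "light" in transcript:
            light_neopixel_ring()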

You can find the full code for the project on its hackster page here.

Variations on the idea

It’s a super-cute project with great potential, and the concept could easily be amended for other training purposes. How about using motion sensors to help someone learn their left from their right?


Wait…your left or my left?
image c/o tattly

Or use random.choice to switch on LEDs over certain images, and speech recognition to reward a correct answer? Light up a picture of a cat, for example, and when the player says “cat”, they receive a ‘purr’ or a treat?
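
That idea needs very little code. Here’s a toy sketch, with the speech-recognition step reduced to a stub:

    # Pick a random picture to light up, then reward a correct answer.
    import random

    PICTURES = ["cat", "dog", "duck"]

    def hear_word():
        # Stand-in for a real speech-recognition call.
        return input("What do you see? ").strip().lower()

    target = random.choice(PICTURES)
    print("(LED on over the '%s' picture)" % target)

    if hear_word() == target:
        print("Purr! Have a treat.")
    else:
        print("Not quite - that was the %s." % target)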


Obligatory kitten picture
image c/o somewhere on the internet!

Raspberry Pi-based educational aids do not have to be elaborate builds. They can use components as simple as a servo and an LED, and still have the potential to make great improvements in people’s day-to-day lives.

Your own projects

If you’ve created an educational tool using a Raspberry Pi, we’d love to see it. The Raspberry Pi itself is an educational tool, so you’re helping it to fulfil its destiny! Make sure you share your projects with us on social media, or pop a link in the comments below. We’d also love to see people using the Pronunciation Training Machine (or similar projects), so make sure you share those too!

A massive shout out to Artie at hackster.io for this heads-up, and for all the other Raspberry Pi projects he sends my way. What a star!

The post The Pronunciation Training Machine appeared first on Raspberry Pi.

New – GPU-Powered Streaming Instances for Amazon AppStream 2.0

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-gpu-powered-streaming-instances-for-amazon-appstream-2-0/

We launched Amazon AppStream 2.0 at re:Invent 2016. This application streaming service allows you to deliver Windows applications to a desktop browser.

AppStream 2.0 is fully managed and provides consistent, scalable performance by running applications on general purpose, compute optimized, and memory optimized streaming instances, with delivery via NICE DCV – a secure, high-fidelity streaming protocol. Our enterprise and public sector customers have started using AppStream 2.0 in place of legacy application streaming environments that are installed on-premises. They use AppStream 2.0 to deliver both commercial and line of business applications to a desktop browser. Our ISV customers are using AppStream 2.0 to move their applications to the cloud as-is, with no changes to their code. These customers focus on demos, workshops, and commercial SaaS subscriptions.

We are getting great feedback on AppStream 2.0 and have been adding new features very quickly (even by AWS standards). So far this year we have added an image builder, federated access via SAML 2.0, CloudWatch monitoring, Fleet Auto Scaling, Simple Network Setup, persistent storage for user files (backed by Amazon S3), support for VPC security groups, and built-in user management including web portals for users.

New GPU-Powered Streaming Instances
Many of our customers have told us that they want to use AppStream 2.0 to deliver specialized design, engineering, HPC, and media applications to their users. These applications are generally graphically intensive and are designed to run on expensive, high-end PCs in conjunction with a GPU (Graphics Processing Unit). Due to the hardware requirements of these applications, cost considerations have traditionally kept them out of situations where part-time or occasional access would otherwise make sense. Recently, another requirement has come to the forefront. These applications almost always need shared, read-write access to large amounts of sensitive data that is best stored, processed, and secured in the cloud. In order to meet the needs of these users and applications, we are launching two new types of streaming instances today:

Graphics Desktop – Based on the G2 instance type, Graphics Desktop instances are designed for desktop applications that use CUDA, DirectX, or OpenGL for rendering. These instances are equipped with 15 GiB of memory and 8 vCPUs. You can select this instance family when you build an AppStream image or configure an AppStream fleet.

Graphics Pro – Based on the brand-new G3 instance type, Graphics Pro instances are designed for high-end, high-performance applications that can use the NVIDIA APIs and/or need access to large amounts of memory. These instances are available in three sizes, with 122 to 488 GiB of memory and 16 to 64 vCPUs. Again, you can select this instance family when you configure an AppStream fleet.
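
As a rough illustration (the fleet name, image name, and size below are placeholders, not recommendations), creating and starting a fleet on one of the new instance types with boto3 looks like this:

    # Create an AppStream 2.0 fleet on a Graphics Pro instance type.
    # Fleet and image names are placeholders.
    import boto3

    appstream = boto3.client("appstream")

    appstream.create_fleet(
        Name="cad-fleet",
        ImageName="my-graphics-image",
        InstanceType="stream.graphics-pro.4xlarge",
        ComputeCapacity={"DesiredInstances": 2},
        Description="GPU-powered fleet for design apps",
    )

    # Fleets start out stopped; start one before associating it with a stack.
    appstream.start_fleet(Name="cad-fleet")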

To learn more about how to launch, run, and scale a streaming application environment, read Scaling Your Desktop Application Streams with Amazon AppStream 2.0.

As I noted earlier, you can use either of these two instance types to build an AppStream image. This will allow you to test and fine-tune your applications and to see the instances in action.

Streaming Instances in Action
We’ve been working with several customers during a private beta program for the new instance types. Here are a few stories (and some cool screen shots) to show you some of the applications that they are streaming via AppStream 2.0:

AVEVA is a world leading provider of engineering design and information management software solutions for the marine, power, plant, offshore and oil & gas industries. As part of their work on massive capital projects, their customers need to bring many groups of specialist engineers together to collaborate on the creation of digital assets. In order to support this requirement, AVEVA is building SaaS solutions that combine the streamed delivery of engineering applications with access to a scalable project data environment that is shared between engineers across the globe. The new instances will allow AVEVA to deliver their engineering design software in SaaS form while maximizing quality and performance. Here’s a screen shot of their Everything 3D app being streamed from AppStream:

Nissan, a Japanese multinational automobile manufacturer, trains its automotive specialists using 3D simulation software running on expensive graphics workstations. The training software, developed by The DiSti Corporation, allows its specialists to simulate maintenance processes by interacting with realistic 3D models of the vehicles they work on. AppStream 2.0’s new graphics capability now allows Nissan to deliver these training tools in real time, with up to date content, to a desktop browser running on low-cost commodity PCs. Their specialists can now interact with highly realistic renderings of a vehicle that allows them to train for and plan maintenance operations with higher efficiency.

Cornell University is an American private Ivy League and land-grant doctoral university located in Ithaca, New York. They deliver advanced 3D tools such as AutoDesk AutoCAD and Inventor to students and faculty to support their course work, teaching, and research. Until now, these tools could only be used on GPU-powered workstations in a lab or classroom. AppStream 2.0 allows them to deliver the applications to a web browser running on any desktop, where they run as if they were on a local workstation. Their users are no longer limited by available workstations in labs and classrooms, and can bring their own devices and have access to their course software. This increased flexibility also means that faculty members no longer need to take lab availability into account when they build course schedules. Here’s a copy of Autodesk Inventor Professional running on AppStream at Cornell:

Now Available
Both of the graphics streaming instance families are available in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) Regions and you can start streaming from them today. Your applications must run in a Windows 2012 R2 environment, and can make use of DirectX, OpenGL, CUDA, OpenCL, and Vulkan.

With prices in the US East (Northern Virginia) Region starting at $0.50 per hour for Graphics Desktop instances and $2.05 per hour for Graphics Pro instances, you can now run your simulation, visualization, and HPC workloads in the AWS Cloud on an economical, pay-by-the-hour basis. You can also take advantage of fast, low-latency access to Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), AWS Lambda, Amazon Redshift, and other AWS services to build processing workflows that handle pre- and post-processing of your data.

Jeff;

 

[$] CentOS and ARM

Post Syndicated from jake original https://lwn.net/Articles/726441/rss

The CentOS distribution has long been
a boon to those who want an enterprise-level operating system without an
enterprise-level support contract—and the costs that go with it. In
keeping with its server orientation, CentOS has been largely focused on
x86 systems, but that has been changing over the last few
years. Jim Perrin has been with the project since 2004 and his talk at Open
Source Summit Japan
(OSSJ) described the process of making CentOS
available for the ARM server market; he also discussed the status of that
project and some plans for the future.

Scammers Pick Up NYAA Torrents Domain Name

Post Syndicated from Ernesto original https://torrentfreak.com/scammers-pick-up-nyaa-torrents-domain-name-170624/

For years NYAA Torrents was heralded as one of the top sources for anime content, serving an audience of millions of users.

This changed abruptly early last month when the site’s domain names were deactivated and stopped working.

TorrentFreak heard from several people, including site moderators and other people close to the site, that NYAA’s owner decided to close the site voluntarily. However, no comments were made in public.

While many former users moved on to other sites, some started to see something familiar when they checked their old bookmarks this week. All of a sudden, NYAA.eu was loading just fine, albeit with a twist.

“Due to the regulation & security issues with Bittorrent, the Nyaa Team has decided to move from torrent to a faster & secure part of the internet!” a message posted on the site reads.

Instead, the site says it’s going underground, encouraging visitors to download the brand new free “binary client.” At the same time, it warns against ‘fake’ NYAA sites.

“We wish we could keep up the torrent tracker, but it is to risky for our torrent crew as well as for our fans. Nyaa.se has been shut down as well. All other sites claiming to be the new Nyaa are Fake!”

Fake NYAA

The truth is, however, that the site itself is “fake.” After the domain name was deactivated it was put back into rotation by the .EU registry, allowing outsiders to pick it up. These people are now trying to monetize it with their download offer.

According to the Whois information, NYAA.eu is registered to the German company Goodlabs, which specializes in domain name monetization.

The client download link on the site points to a Goo.gl shorturl, which in turn redirects to an affiliate link for a Usenet service. At least, last time we checked.

The people who registered the domain hope that people will sign up there, assuming that it’s somehow connected to the old NYAA crew.

Thus far, over 27,000 people have clicked on the link in just a few days. This means that the domain name still generates significant traffic, mostly from Japan, the United States, and France.

While it is likely new to former NYAA users, this type of scam is pretty common. There are a few file-sharing related domains with similar messages, including Demonoid.to, Isohunts.to, All4nothin.net, Torrenthounds.com, Proxyindex.net, Ddgamez.com and many others.

Some offer links to affiliate deals and others point to direct downloads of .exe files. It’s safe to say that it’s best to stay far away from all of these.


[$] Specifying the kernel ABI

Post Syndicated from jake original https://lwn.net/Articles/726021/rss

At Open
Source Summit Japan
(OSSJ)—OSS is the new name for LinuxCon,
ContainerCon, and CloudOpen—Sasha Levin gave a talk on the kernel’s
application binary interface (ABI). There is an effort to create a kernel
ABI specification that has its genesis in a
discussion about fuzzers
at the 2016 Linux Plumbers Conference. Since
that time,
some progress on it has been made, so Levin described what the ABI is and the
benefits that would come from having a specification. He also covered
what has been done so far—and the extensive work remaining to be done.

Fail your way to perfection

Post Syndicated from Olympia Brown original https://www.raspberrypi.org/blog/fail-perfection/

As educators and makers at Raspberry Pi, we think a lot about failure and how to deal with it constructively. Much has been written about the importance of failure to design and engineering projects. It is undoubtedly true that you can learn a lot from your mistakes, like getting the wrong size of part, mistyping your code, or not measuring when doing your DIY. The importance of failure has even become a bit of a common trope: just think of those slightly annoying inspirational quotes attributed to famous historical figures which you find all over social media.


I have not failed. I’ve just found 10,000 ways that won’t work. Thomas Edison.

Failure can be good!

But, as with many a cliché, there is an underlying truth here that is worth revisiting. Designing, engineering, and creating all involve making mistakes along the way. Even though failures feel bad, by reaching out when something goes wrong, you can call on the expertise of your community, learn, and make the final result better.

However, we often think failing also makes us look bad, so we don’t talk about it as an essential part of the process that got us to the end stage. We make things shiny and glossy to big-up our success, putting all the focus on the result. This tendency is, however, not necessarily helpful if we want to help educate others. As Jonathan Sanderson of NUSTEM puts it:

Jonathan Sanderson on Twitter

stem educators: worth noting: confessions of rank stupidity in digital making get responses, sympathy, offers of help on Twitter. (1/2)

Jonathan Sanderson on Twitter

yet our write-ups only feature the things we did right. Mis-steps and recovery from failure are key parts of process. (2/2)

The NUSTEM team truly believes in this: when sharing their builds, they include a section on what they would do differently next time. By highlighting the journey, and the mistakes made along the way, they are not only helping those that also want to go on that journey, they are also demystifying the process a bit.

Celebrate your fails

Because failure feels bad, we don’t routinely celebrate it. But there are niches where failure is celebrated: Simone Giertz’s (slightly sweary) YouTube videos are a great example. And then there is Hebocon, the Japanese competition for cruddy robots. In fact, the organisers of Hebocon make a great point: crafts that do not go as intended are interesting.

This is as much true when working with young people as it is in the wider world. In Pioneers, we also want to do our bit to celebrate failure. Our judges don’t just watch the teams’ videos to see how they overcame what went wrong along the way, they also have an award category that celebrates wrong turns and dead ends: ‘We appreciate what you’re trying to do’. Our first challenge’s winning entry in this category was PiCymru’s We Shall Overcomb:

PiCymru : Make us Laugh Challenge

The video of the PiCymru team’s Pioneers challenge entry! The team wasn’t able to get things to work the way they hoped, but wanted to share the joy of failure 🙂


The category name was suggested by our lovely judge from the first cycle, stand-up comedian Bec Hill: it’s one of the accepted heckles the audience can shout out at her stand-up scratch nights. Scratch nights are preview events at which a comedian tests new material, and they are allowed to fail on stage. We may not often think of comedy as embracing failure, but comedians do scratch nights specifically to learn from their mistakes, and to make the final product all the better for it. Interestingly, scratch nights are hugely popular with audiences.

So, if you’re working with a group of young people, what can you do to encourage learning from failure and not let them give up?

Helping you to fail better

In our book Ideas start here, for Pioneers mentors, we’ve given a few tips and phrases that can come in useful. For example, if someone says, “It isn’t working!”, you could respond with “Why not? Have you read the error log?” RTFM is a real thing, and an important skill for digital life.

We agree with engineer Prof Danielle George, who believes in being honest about your failures and highlighting their importance to where you’ve got now. “I fail a lot,” she says. “The trick is to embrace these failures; we don’t have to succeed the first time. We learn from our mistakes and move forwards.”

If, as a mentor, you’re not sure how to encourage and support those not used to failing, this article also has some more tips.

If nothing else helps, but you need to feel inspired, think about what someone said to Karen, who sucks at surfing:

Karen, you are actually pretty good at surfing. Keep in mind that billions of other humans wouldn’t dare even try.

How about you? If you have a story of what you learned from failure in one of your projects, share it in the comments!


The post Fail your way to perfection appeared first on Raspberry Pi.

Some notes on Trump’s cybersecurity Executive Order

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/some-notes-on-trumps-cybersecurity.html

President Trump has finally signed an executive order on “cybersecurity”. The first draft, during his first weeks in power, was hilariously ignorant. The current draft, though, is pretty reasonable as such things go. I’m just reading the plain language of the draft as a cybersecurity expert, picking out the bits that interest me. In reality, there’s probably all sorts of politics in the background that I’m missing, so I may be wildly off-base.

Holding managers accountable

This is a great idea in theory. But government heads are rarely accountable for anything, so it’s hard to see whether they’ll have the nerve to implement this in practice. When the next breach happens, we’ll see if anybody gets fired.
“antiquated and difficult to defend Information Technology”

The government sometimes uses laughably old computers. Forces in government want to upgrade them. This won’t work. Instead of replacing old computers, the budget will simply be used to add new computers. The old computers will still stick around.
“Legacy” is a problem that money can’t solve. Programmers know how to build small things, but not big things. Everything starts out small, then becomes big gradually over time through constant small additions. What you have now is big legacy systems. Attempts to replace a big system with a built-from-scratch big system will fail, because engineers don’t know how to build big systems. This will suck down any amount of budget you have with failed multi-million dollar projects.
It’s not the antiquated systems that are usually the problem, but more modern systems. Antiquated systems can usually be protected by simply sticking a firewall or proxy in front of them.

“address immediate unmet budgetary needs necessary to manage risk”

Nobody cares about cybersecurity. Instead, it’s a thing people exploit in order to increase their budget. Instead of doing the best security with the budget they have, they insist they can’t secure the network without more money.

An alternate way to address gaps in cybersecurity is instead to do less. Reduce exposure to the web, provide fewer services, reduce functionality of desktop computers, and so on. Insisting that more money is the only way to address unmet needs is the strategy of the incompetent.

Use the NIST framework
Probably the biggest thing in the EO is that it forces everyone to use the NIST cybersecurity framework.
The NIST Framework simply documents all the things that organizations commonly do to secure themselves, such as running intrusion-detection systems or imposing rules for good passwords.
There are two problems with the NIST Framework. The first is that no organization does all the things listed. The second is that many organizations don’t do the things well.
Password rules are a good example. Organizations typically had bad rules, such as frequent changes and complexity standards. So the NIST Framework documented them. But cybersecurity experts have long opposed those complex rules, and have been fighting NIST over them.

Another good example is intrusion-detection. These days, I scan the entire Internet, setting off everyone’s intrusion-detection systems. I can see firsthand that they are doing intrusion-detection wrong. The NIST Framework recommends intrusion-detection because many organizations do it, but it doesn’t demand they do it well.
When this EO forces everyone to follow the NIST Framework, then, it’s likely just going to increase the amount of money spent on cybersecurity without increasing effectiveness. That’s not necessarily a bad thing: while probably ineffective or counterproductive in the short run, there might be a long-term benefit in aligning everyone toward thinking about the problem the same way.
Note that “following” the NIST Framework doesn’t mean “doing” everything. Instead, it means documenting how you do each thing, giving a reason why you aren’t doing it, or (most often) stating your plan to eventually do it.
preference for shared IT services for email, cloud, and cybersecurity
Different departments are hostile toward each other, with each doing things their own way. Obviously, the thinking goes, if more departments shared resources, they could cut costs with economies of scale. Also obviously, it’ll stop the many home-grown wrong solutions that individual departments come up with.
In other words, there should be a single government GMail-type service that does e-mail both securely and reliably.
But it won’t turn out this way. Government does not have “economies of scale” but “incompetence at scale”. It means a single GMail-like service that is expensive, unreliable, and in the end, probably insecure. It means we can look forward to government breaches that, instead of affecting one department, affect all departments.

Yes, you can point to individual organizations that do things poorly, but what you are ignoring is the organizations that do it well. When you make them all share a solution, it’s going to be the average of all these things — meaning those who do something well are going to move to a worse solution.

I suppose this was inserted in there so that big government cybersecurity companies can now walk into agencies, point to where they are deficient on the NIST Framework, and say “sign here to do this with our shared cybersecurity service”.
“identify authorities and capabilities that agencies could employ to support the cybersecurity efforts of critical infrastructure entities”
What this means is “how can we help secure the power grid?”.
What it means in practice is that fiasco in the Vermont power grid. The DHS produced a report containing IoCs (“indicators of compromise”) of Russian hackers in the DNC hack. Among the things it identified was that the hackers used Yahoo! email. They pushed these IoCs out as signatures in their “Einstein” intrusion-detection system, located at many power grid locations. The next person who logged into their Yahoo! email was then flagged as a Russian hacker, causing all sorts of hilarity to ensue, such as still-uncorrected stories by the Washington Post about how the Russians hacked our power grid.
The upshot is that federal government help is also going to include much government hindrance. They really are this stupid sometimes and there is no way to fix this stupid. (Seriously, the DHS still insists it did the right thing pushing out the Yahoo IoCs).
Resilience Against Botnets and Other Automated, Distributed Threats

The government wants to address botnets because it’s just the sort of problem they love: mass outages across the entire Internet caused by a million machines.

But frankly, botnets don’t even make the top 10 list of problems they should be addressing. #1 is clearly “phishing” — you know, the attack that’s been getting into the DNC and Podesta e-mails, influencing the election. You know, the attack that Gizmodo recently showed the Trump administration is partially vulnerable to. You know, the attack that most people blame for that huge OPM hack. Replace the entire Executive Order with “stop phishing”, and you’d go further toward fixing federal government security.

But solving phishing is tough. To begin with, it requires a rethink of how the government does email, and of how desktop systems should be managed. So the government avoids complex problems it can’t understand to focus on the simple things it can — botnets.

Dealing with “prolonged power outage associated with a significant cyber incident”

The government has had the hots for this since 2001, even though there’s really been no attack on the American grid. After the Russian attacks against the Ukraine power grid, the issue is heating up.

Nation-wide attacks aren’t really a threat, yet, in America. We have 10,000 different companies involved with different systems throughout the country. Trying to hack them all at once is unlikely. What’s funny is that it’s the government’s attempts to standardize everything that’s likely to be our downfall, such as sticking Einstein sensors everywhere.

What they should be doing, instead of trying to make the grid unhackable, is trying to lessen our reliance upon the grid. They should be encouraging things like Tesla PowerWalls, solar panels on roofs, backup generators, and so on. Indeed, industrial backup power generation should be considered a source of grid backup, rather than industry being seen only as a blackout victim. Factories and even ships were used to supplant the electric power grid in Japan after the 2011 tsunami, for example. The less we rely on the grid, the less a blackout will hurt us.

“cybersecurity risks facing the defense industrial base, including its supply chain”

So “supply chain” cybersecurity is increasingly becoming a thing. Almost anything electronic comes with millions of lines of code, silicon chips, and other things that affect the security of the system. In this context, they may be worried about intentional subversion of systems, such as the recent article worrying about Kaspersky anti-virus in government systems. However, the bigger concern is the zillions of accidental vulnerabilities waiting to be discovered. It’s impractical for a vendor to secure a product, because it’s built from so many components the vendor doesn’t understand.

“strategic options for deterring adversaries and better protecting the American people from cyber threats”

Deterrence is a funny word.

Rumor has it that we forced China to back off on hacking by impressing them with our own hacking ability, such as reaching into China and blowing stuff up. This works because the Chinese government remains in power because things are going well in China. If there’s a hiccup in economic growth, there will be mass actions against the government.

But for our other cyber adversaries (Russia, Iran, North Korea), things already suck in their countries. It’s hard to see how we can make things worse by hacking them. They also have a stranglehold on the media, so hacking in and publicizing their leaders’ weird sex fetishes and offshore accounts isn’t going to work either.

Also, deterrence relies upon “attribution”, which is hard. While news stories claim last year’s expulsion of Russian diplomats was due to election hacking, that wasn’t the stated reason. Instead, the claimed reason was Russia’s interference with diplomats in Europe, such as breaking into diplomats’ homes and pooping on their dining room tables. We know it’s them when they are brazen (as was the case with Chinese hacking), but other hacks are harder to attribute.

Deterrence of nation states ignores the reality that much of the hacking against our government comes from non-state actors. It’s not clear how much of all this Russian hacking is actually directed by the government. Deterrence policies may be better directed at individuals, such as the recent arrest of a Russian hacker while he was traveling in Spain. We can’t get Russian or Chinese hackers in their own countries, so we have to wait until they leave.

Anyway, “deterrence” is one of those real-world concepts that are hard to shoehorn into a cyber (“cyber-deterrence”) equivalent. It encourages lots of bad thinking, such as export controls on “cyber-weapons” to deter foreign countries from using them.

“educate and train the American cybersecurity workforce of the future”

The problem isn’t that we lack CISSPs. Such blanket certifications devalue the technical expertise of the real experts. The solution is to empower the technical experts we already have.

In other words, mandate that whoever is the “cyberczar” is a technical expert, like how the Surgeon General must be a medical expert, or how an economic adviser must be an economic expert. For over 15 years, we’ve had a parade of non-technical people named “cyberczar” who haven’t been experts.

Once you tell people technical expertise is valued, then by nature more students will become technical experts.

BTW, the best technical experts are software engineers and sysadmins. The best cybersecurity for Windows is already built into Windows, and sysadmins need to be empowered to use those solutions. Instead, they are often overridden by a clueless cybersecurity consultant who insists on making the organization buy a third-party product that does a poorer job. We need more technical expertise in our organizations, sure, but not necessarily more cybersecurity professionals.

Conclusion

This is really a government document, and government people will be able to explain it better than I can. This is just how I see it as a technical expert who is a government outsider.

My guess is that the most lasting, consequential thing will be making everyone follow the NIST Framework, and the rest will just be a lot of aspirational stuff that’ll be ignored.

250,000 Pi Zero W units shipped and more Pi Zero distributors announced

Post Syndicated from Mike Buffham original https://www.raspberrypi.org/blog/pi-zero-distributors-annoucement/

This week, just nine weeks after its launch, we will ship the 250,000th Pi Zero W into the market. As well as hitting that pretty impressive milestone, today we are announcing 13 new Raspberry Pi Zero distributors, so you should find it much easier to get hold of a unit.

Raspberry Pi Zero W and Case

This significantly extends the reach we can achieve with Pi Zero and Pi Zero W across the globe. These new distributors serve Australia and New Zealand, Italy, Malaysia, Japan, South Africa, Poland, Greece, Switzerland, Denmark, Sweden, Norway, and Finland. We are also further strengthening our network in the USA, Canada, and Germany, where demand continues to be very high.

Pi Zero W

A common theme on the Raspberry Pi forums has been the difficulty of obtaining a Zero or Zero W in a number of countries. This has been most notable in the markets which are furthest away from Europe or North America. We are hoping that adding these new distributors will make it much easier for Pi-fans across the world to get hold of their favourite tiny computer.

We know there are still more markets to cover, and we are continuing to work with other potential partners to improve the Pi Zero reach. Watch this space for even further developments!

Who are the new Pi Zero Distributors?

Check the icons below to find the distributor that’s best for you!

Australia and New Zealand

Core Electronics

PiAustralia Raspberry Pi

South Africa

PiShop

Please note: Pi Zero W is not currently available to buy in South Africa, as we are waiting for ICASA Certification.

Denmark, Sweden, Finland, and Norway

JKollerup

electro:kit

Germany and Switzerland

sertronics

pi-shop

Poland

botland

Greece

nettop

Italy

Japan

KSY

Switch Science

Please note: Pi Zero W is not currently available to buy in Japan, as we are waiting for TELEC Certification.

Malaysia

Cytron

Please note: Pi Zero W is not currently available to buy in Malaysia, as we are waiting for SIRIM Certification.

Canada and USA

buyapi

Get your Pi Zero

For full product details, plus a complete list of Pi Zero distributors, visit the Pi Zero W page.

Awesome feature image GIF credit goes to Justin Mezzell


AWS Big Data Blog Month in Review: March 2017

Post Syndicated from Derek Young original https://aws.amazon.com/blogs/big-data/aws-big-data-blog-month-in-review-march-2017/

Another month of big data solutions on the Big Data Blog. Please take a look at our summaries below and learn, comment, and share. Thank you for reading!

Analyze Security, Compliance, and Operational Activity Using AWS CloudTrail and Amazon Athena
This blog post walks through how to set up and use the recently released Amazon Athena CloudTrail SerDe to query CloudTrail log files for EC2 security group modifications, console sign-in activity, and operational account activity.
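To make the pattern concrete, here is a minimal sketch of that kind of query driven from Python with boto3. The database, table, and bucket names are hypothetical placeholders, and the table is assumed to have already been created with the CloudTrail SerDe as the post describes.

    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Find recent console sign-ins in a table created with the CloudTrail
    # SerDe; "cloudtrail_logs" and the S3 locations are placeholders.
    query = """
    SELECT useridentity.username, sourceipaddress, eventtime
    FROM cloudtrail_logs
    WHERE eventname = 'ConsoleLogin'
    LIMIT 50;
    """

    athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "default"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )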

Big Updates to the Big Data on AWS Training Course!
AWS offers a range of training resources to help you advance your knowledge with practical skills so you can get more out of the cloud. We’ve updated Big Data on AWS, a three-day, instructor-led training course, to keep pace with the latest AWS big data innovations. This course lets you hear big data best practices from an expert, get answers to your questions in person, and get hands-on practice using AWS big data services.

Analyzing VPC Flow Logs with Amazon Kinesis Firehose, Amazon Athena, and Amazon QuickSight
In this blog post, you’ll build a serverless architecture using Amazon Kinesis Firehose, AWS Lambda, Amazon S3, Amazon Athena, and Amazon QuickSight to collect, store, query, and visualize flow logs. In building this solution, you’ll also learn how to implement Athena best practices for compressing and partitioning data, so as to reduce query latencies and drive down query costs.
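As a hedged illustration of the partitioning advice (the table name, bucket, and partition layout below are assumptions, not the post’s exact schema), this sketch registers one day’s worth of flow logs as a partition and then queries only that partition:

    import boto3

    athena = boto3.client("athena", region_name="us-east-1")
    results = {"OutputLocation": "s3://my-athena-results/"}

    def run(sql):
        return athena.start_query_execution(
            QueryString=sql,
            QueryExecutionContext={"Database": "default"},
            ResultConfiguration=results,
        )

    # Add one day's partition, pointing at compressed objects in S3.
    run("""
    ALTER TABLE vpc_flow_logs
    ADD IF NOT EXISTS PARTITION (ingestdate = '2017-03-01')
    LOCATION 's3://my-flow-logs/2017/03/01/';
    """)

    # Scanning a single partition instead of the whole table reduces both
    # query latency and the amount of data Athena bills for.
    run("""
    SELECT action, COUNT(*) AS hits
    FROM vpc_flow_logs
    WHERE ingestdate = '2017-03-01'
    GROUP BY action;
    """)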

Amazon Redshift Monitoring Now Supports End User Queries and Canaries
The serverless Amazon Redshift Monitoring utility lets you gather important performance metrics from your Redshift cluster’s system tables and persist the results in Amazon CloudWatch. You can now create your own diagnostic queries and plug in “canaries” that monitor the runtime of your most vital end-user queries. These user-defined metrics can be used to create dashboards and trigger alarms, and should improve visibility into workloads running on a cluster.
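The canary idea is simple enough to sketch independently of the utility itself. The following is illustrative only, not the utility’s own code: the connection details, query, and metric name are all placeholders. It times a representative end-user query and publishes the runtime as a custom CloudWatch metric.

    import time
    import boto3
    import psycopg2  # standard PostgreSQL driver; Redshift speaks its protocol

    conn = psycopg2.connect(
        host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="monitor", password="...",
    )

    start = time.monotonic()
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM sales "
                    "WHERE sale_date >= CURRENT_DATE - 7;")
        cur.fetchone()
    elapsed = time.monotonic() - start

    # Publish the canary's runtime so dashboards and alarms can track it.
    boto3.client("cloudwatch").put_metric_data(
        Namespace="Redshift/Canaries",
        MetricData=[{
            "MetricName": "WeeklySalesCountRuntime",
            "Value": elapsed,
            "Unit": "Seconds",
        }],
    )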

Running R on Amazon Athena
In this blog post, you’ll connect R/RStudio running on an Amazon EC2 instance to Athena and build a simple interactive application with Athena and R. Athena can be used to store and query the underlying data for your big data applications using standard SQL, while R can be used to interactively query Athena and generate analytical insights using the powerful set of libraries that R provides. This post has been translated into Japanese.
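The post itself drives Athena from R via the JDBC driver. Purely to illustrate the same interactive query pattern, here is a rough Python analogue using the community pyathena package (an assumption for illustration, not what the post uses; bucket and table are placeholders).

    from pyathena import connect

    # s3_staging_dir is where Athena writes query results.
    conn = connect(
        s3_staging_dir="s3://my-athena-results/",
        region_name="us-east-1",
    )
    cursor = conn.cursor()
    cursor.execute("SELECT eventname, COUNT(*) AS n "
                   "FROM cloudtrail_logs GROUP BY eventname LIMIT 10;")
    for row in cursor:
        print(row)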

Top 10 Performance Tuning Tips for Amazon Athena
In this blog post, we review the top 10 tips that can improve query performance. We focus on aspects related to storing data in Amazon S3 and tuning specific to queries. Amazon Athena uses Presto to run SQL queries, so some of the advice also applies if you are running Presto on Amazon EMR. This post has been translated into Japanese.
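Two of the recurring tips, partition pruning and avoiding SELECT *, can be shown side by side. This is an illustrative sketch against a hypothetical table; both queries would run through the same start_query_execution call shown earlier.

    # Hypothetical "events" table, partitioned by year and month.
    slow = "SELECT * FROM events;"  # scans (and bills for) the whole table

    fast = """
    SELECT useragent, COUNT(*) AS n
    FROM events
    WHERE year = '2017' AND month = '03'   -- partition pruning
    GROUP BY useragent;
    """
    # Reading only the needed columns from a pruned partition cuts both
    # query latency and the amount of data Athena charges for scanning.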

Big Data Resources on the AWS Knowledge Center
The AWS Knowledge Center answers the questions we receive most frequently from AWS customers. It is a resource for you that is distinct from AWS Documentation, the AWS Discussion Forums, and the AWS Support Center. It covers questions from across every AWS service. This post is an introduction to Big Data resources on the AWS Knowledge Center. 

Encrypt and Decrypt Amazon Kinesis Records Using AWS KMS
In this blog post, you’ll learn to build encryption and decryption into sample Kinesis producer and consumer applications using the Amazon Kinesis Producer Library (KPL), the Amazon Kinesis Client Library (KCL), AWS KMS, and the aws-encryption-sdk. The methods and techniques used in this post to encrypt and decrypt Kinesis records can easily be replicated in your own architecture.
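As a deliberately simplified sketch of the idea (the post itself uses the KPL/KCL and the aws-encryption-sdk; this version calls KMS directly, and the key alias and stream name are placeholders):

    import boto3

    kms = boto3.client("kms")
    kinesis = boto3.client("kinesis")

    KEY_ID = "alias/my-kinesis-key"   # placeholder KMS key alias
    STREAM = "my-encrypted-stream"    # placeholder stream name

    def put_encrypted(payload: bytes, partition_key: str):
        # Note: direct KMS encryption caps plaintext at 4 KB; the envelope
        # encryption in the post's aws-encryption-sdk approach avoids that.
        blob = kms.encrypt(KeyId=KEY_ID, Plaintext=payload)["CiphertextBlob"]
        kinesis.put_record(StreamName=STREAM, Data=blob,
                           PartitionKey=partition_key)

    def decrypt_record(data: bytes) -> bytes:
        # KMS ciphertext embeds a reference to its key, so no KeyId here.
        return kms.decrypt(CiphertextBlob=data)["Plaintext"]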

Want to learn more about Big Data or Streaming Data? Check out our Big Data and Streaming Data educational pages.

Leave a comment below to let us know what big data topics you’d like to see next on the AWS Big Data Blog.

APT10 and Cloud Hopper

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/apt10_and_cloud.html

There’s a new report of a nation-state attack, presumed to be from China, on a series of managed service providers. From the executive summary:

Since late 2016, PwC UK and BAE Systems have been assisting victims of a new cyber espionage campaign conducted by a China-based threat actor. We assess this threat actor to almost certainly be the same as the threat actor widely known within the security community as ‘APT10’. The campaign, which we refer to as Operation Cloud Hopper, has targeted managed IT service providers (MSPs), allowing APT10 unprecedented potential access to the intellectual property and sensitive data of those MSPs and their clients globally. A number of Japanese organisations have also been directly targeted in a separate, simultaneous campaign by the same actor.

We have identified a number of key findings that are detailed below.

APT10 has recently unleashed a sustained campaign against MSPs. The compromise of MSP networks has provided broad and unprecedented access to MSP customer networks.

  • Multiple MSPs were almost certainly being targeted from 2016 onwards, and it is likely that APT10 had already begun to do so from as early as 2014.
  • MSP infrastructure has been used as part of a complex web of exfiltration routes spanning multiple victim networks.

[…]

APT10 focuses on espionage activity, targeting intellectual property and other sensitive data.

  • APT10 is known to have exfiltrated a high volume of data from multiple victims, exploiting compromised MSP networks, and those of their customers, to stealthily move this data around the world.
  • The targeted nature of the exfiltration we have observed, along with the volume of the data, is reminiscent of the previous era of APT campaigns pre-2013.

PwC UK and BAE Systems assess APT10 as highly likely to be a China-based threat actor.

  • It is a widely held view within the cyber security community that APT10 is a China-based threat actor.
  • Our analysis of the compile times of malware binaries, the registration times of domains attributed to APT10, and the majority of its intrusion activity indicates a pattern of work in line with China Standard Time (UTC+8).
  • The threat actor’s targeting of diplomatic and political organisations in response to geopolitical tensions, as well as the targeting of specific commercial enterprises, is closely aligned with strategic Chinese interests.
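The compile-time analysis in the first of those two findings is easy to illustrate. A minimal sketch, assuming you have already extracted compile timestamps from the binaries (the timestamps below are made up): shift each one to UTC+8 and check whether activity clusters in a normal working day.

    from collections import Counter
    from datetime import datetime, timedelta, timezone

    CST = timezone(timedelta(hours=8))  # China Standard Time, UTC+8

    # Compile timestamps extracted from binaries, as UTC datetimes
    # (hypothetical sample data).
    compile_times_utc = [
        datetime(2016, 11, 3, 1, 12, tzinfo=timezone.utc),
        datetime(2016, 11, 3, 6, 40, tzinfo=timezone.utc),
        datetime(2016, 12, 9, 2, 5, tzinfo=timezone.utc),
    ]

    # Histogram of activity by local hour in UTC+8.
    hours = Counter(t.astimezone(CST).hour for t in compile_times_utc)
    for hour in sorted(hours):
        print(f"{hour:02d}:00 CST  {'#' * hours[hour]}")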

I know nothing more than what’s in this report, but it looks like a big one.

Press release.