Apply now for Picademy in Baltimore

Post Syndicated from Matt Richardson original https://www.raspberrypi.org/blog/apply-now-picademy-baltimore/

picademy-gif-2mb

Making computing accessible is a major part of the Raspberry Pi Foundation’s mission. Our low-cost, high-performance computer is just one way that we achieve that. With our Picademy program, we also train teachers so that more young people can learn about computers and how to make things with them.

Throughout 2016, we're running a United States pilot of Picademy. The Raspberry Pi Foundation has committed to training 100 teachers on US soil this year, and we made another leap toward meeting that commitment last weekend with our second cohort; more on that below.

DHF-Square-Lockup

In order to make Picademy more accessible for US educators, we're happy to announce our third Picademy USA workshop, which will take place August 13 and 14 at the Digital Harbor Foundation in Baltimore, Maryland. Applications are open now and will close in early July. Please help us spread the word: we want to hear from the most enthusiastic and creative educators from all disciplines, not just computing. Picademy cohorts are made up of an incredible mixture of educators from different subject areas. Not only will these educators learn about digital making from the Raspberry Pi education team, but they'll also meet and collaborate with a group of incredibly passionate peers.

To give you an idea of the passion and enthusiasm, I want to introduce you to our second US cohort of Raspberry Pi Certified Educators. Last weekend at the Computer History Museum, they gathered from all over North America to learn the ropes of digital making with Raspberry Pi and collaborate on projects together. They knocked it out of the park.

Our superhero Raspberry Pi Certified Educators! © Douglas Fairbairn Photography / Courtesy of the Computer History Museum

Peek into the #Picademy hashtag and you’ll get a small taste of what it’s like to be a part of this program:

Abby Almerido on Twitter

Sign of transformative learning = Unquenchable thirst for more #picademy Thank you @LegoJames @MattRichardson @ben_nuttall @olsonk408

Keith Baisley on Twitter

Such a fun/engaging weekend of learning,can’t thank you all enough @LegoJames @MattRichardson @ben_nuttall @EbenUpton and others #picademy

Peter Strawn on Twitter

Home from #Picademy. What an incredible weekend. Thank you, @Raspberry_Pi. Now to reflect and put my experience into action!

Dan Blickensderfer on Twitter

Pinned. What a great community. Thanks! #picademy pic.twitter.com/TLLzjff0wF

Making Picademy a success takes a lot of work from many people. Thank you to Lauren Silver, Kate McGregor, Stephanie Corrigan, and everyone at the Computer History Museum; to Kevin Olson, a Raspberry Pi Certified Educator who stepped in to help facilitate the workshops; to Kevin Malachowski, Ruchi Lohani, Sam Patterson, Jesse Lozano, and Eben Upton, who mentored the educators; and to Sonia Uppal, Abhinav Mathur, and Keshav Saharia for presenting their amazing work with Raspberry Pi.

If you want to join our tribe and you can be in Baltimore on August 13th and 14th, please apply to be a part of our next Picademy in the United States! For updates on future Picademy workshops in the US, please click here to sign up for notifications.

The post Apply now for Picademy in Baltimore appeared first on Raspberry Pi.

Credential Stealing as an Attack Vector

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/05/credential_stea.html

Traditional computer security concerns itself with vulnerabilities. We employ antivirus software to detect malware that exploits vulnerabilities. We have automatic patching systems to fix vulnerabilities. We debate whether the FBI should be permitted to introduce vulnerabilities in our software so it can get access to systems with a warrant. This is all important, but what’s missing is a recognition that software vulnerabilities aren’t the most common attack vector: credential stealing is.

The most common way hackers of all stripes, from criminals to hacktivists to foreign governments, break into networks is by stealing and using a valid credential. Basically, they steal passwords, set up man-in-the-middle attacks to piggy-back on legitimate logins, or engage in cleverer attacks to masquerade as authorized users. It’s a more effective avenue of attack in many ways: it doesn’t involve finding a zero-day or unpatched vulnerability, there’s less chance of discovery, and it gives the attacker more flexibility in technique.

Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) group — basically the country’s chief hacker — gave a rare public talk at a conference in January. In essence, he said that zero-day vulnerabilities are overrated, and credential stealing is how he gets into networks: “A lot of people think that nation states are running their operations on zero days, but it’s not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

This is true for us, and it's also true for those attacking us. It's how the Chinese hackers breached the Office of Personnel Management in 2015. The 2014 criminal attack against Target Corporation started when hackers stole the login credentials of the company's HVAC vendor. Iranian hackers stole US login credentials. And the hacktivist who broke into the cyber-arms manufacturer Hacking Team and published pretty much every proprietary document from that company used stolen credentials.

As Joyce said, stealing a valid credential and using it to access a network is easier, less risky, and ultimately more productive than using an existing vulnerability, even a zero-day.

Our notions of defense need to adapt to this change. First, organizations need to beef up their authentication systems. There are lots of tricks that help here: two-factor authentication, one-time passwords, physical tokens, smartphone-based authentication, and so on. None of these is foolproof, but they all make credential stealing harder.
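To make the one-time password idea concrete, here is a minimal sketch of an RFC 6238 time-based one-time password (TOTP) generator using only the Python standard library; the shared secret and 30-second step below are illustrative assumptions, not any particular vendor's implementation.

import base64, hashlib, hmac, struct, time

def totp(secret_b32, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step                  # moving factor: current 30-second window
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()  # HOTP/TOTP use HMAC (SHA-1 by default)
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    # Example base32 secret; in practice it is provisioned per user and kept off the password database.
    print(totp("JBSWY3DPEHPK3PXP"))

Even a scheme this simple changes the economics: a stolen password alone is no longer enough, because the attacker also needs the per-user secret or the device that holds it.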

Second, organizations need to invest in breach detection and — most importantly — incident response. Credential-stealing attacks tend to bypass traditional IT security software. But attacks are complex and multi-step. Being able to detect them in process, and to respond quickly and effectively enough to kick attackers out and restore security, is essential to resilient network security today.

Vulnerabilities are still critical. Fixing vulnerabilities is still vital for security, and introducing new vulnerabilities into existing systems is still a disaster. But strong authentication and robust incident response are also critical. And an organization that skimps on these will find itself unable to keep its networks secure.

This essay originally appeared on Xconomy.

[$] task_diag and statx()

Post Syndicated from corbet original http://lwn.net/Articles/685791/rss

The interfaces supported by Linux to provide access to information about processes and files have literally been around for decades. One might think that, by this time, they would have reached a state of relative perfection. But things are not so perfect that developers are deterred from working on alternatives; the motivating factor in the two cases studied here is the same: reducing the cost of getting information out of the kernel while increasing the range of information that is available.

Click below (subscribers only) for the full article from this week's Kernel Page.

Vulns are sparse, code is dense

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/05/vulns-are-sparse-code-is-dense.html

The question posed by Bruce Schneier is whether vulnerabilities are "sparse" or "dense". If they are sparse, then finding and fixing them will improve things. If they are "dense", then all the work put into finding, disclosing, and fixing them does nothing to improve things, because there is still an unending number of unpatched bugs waiting to be discovered.

I propose a third option: vulns are sparse, but code is dense.

In other words, we can secure specific things, like OpenSSL and Chrome, by researching the heck out of them, finding vulns, and patching them. The vulns in those projects are sparse; disclosing and fixing those bugs improves security.

But the amount of code out there is enormous, considering all the software in the world. And it changes fast, adding new vulns faster than our feeble efforts at disclosing and fixing them. Finding and disclosing bugs in random software (like IoT devices) has little marginal impact on the total number of bugs out there.

So measured across all software, no, the security community hasn't found any significant fraction of the bugs. But when looking at specific critical software, like OpenSSL and Chrome, I think we've made great strides forward.

More importantly, let's ignore the actual benefits and costs of fixing bugs for the moment. What all this effort has done is teach us about the nature of vulns. Critical software is written today in a vastly more secure manner than it was in the 1980s, 1990s, or even the 2000s. Windows, for example, is vastly more secure. Sure, others are still lagging (car makers, medical device makers, IoT), but they are quickly learning the lessons and catching up. Finding a vuln in an iPhone is hard, so hard that hackers can earn $1 million selling one to the NSA rather than stealing your credit card info. Fifteen years ago, the opposite was true: the NSA didn't pay for Windows vulns because they fell out of trees, and hackers made more money from hacking your computer.

My point is this: the "are vulns sparse or dense?" framing puts a straitjacket around the debate. I see it from an orthogonal point of view: vuln disclosure helps specific software, and the overall way we create software, even while code is so "dense" that disclosure makes no dent in the total number of vulns in the universe.

The Linux Embedded Development Environment launches

Post Syndicated from corbet original http://lwn.net/Articles/686180/rss

The Linux Embedded Development Environment (or LEDE) project, a fork (or "spinoff") of OpenWrt, has announced its existence. "We are building an embedded Linux distribution that makes it easy for developers, system administrators or other Linux enthusiasts to build and customize software for embedded devices, especially wireless routers. […] Members of the project already include a significant share of the most active members of the OpenWrt community. We intend to bring new life to Embedded Linux development by creating a community with a strong focus on transparency, collaboration and decentralisation." The new project lives at lede-project.org.

(Thanks to Mattias Mattsson).

AWS Big Data Meetup May 5 in Palo Alto: Explore the Power of Machine Learning in the Cloud

Post Syndicated from Andy Werth original https://blogs.aws.amazon.com/bigdata/post/Tx1NLA8QCZ94812/AWS-Big-Data-Meetup-May-5-in-Palo-Alto-Explore-the-Power-of-Machine-Learning-in

Join and RSVP!

AWS Speaker

Guy Ernest, business development manager for machine learning services in AWS

"No Dr., or How I Learned to Stop Debugging and Love the Robot": In this talk, Guy will discuss what developers must know to explore the power of machine learning services in the cloud. Using data to build machine learning models is a powerful alternative to heuristics or handwritten rules. This power is not limited to people with a Ph.D. or M.Sc. in machine learning, statistics, or computer science; it can be used successfully by competent developers. You will learn how to get started and how to think in machine learning terms when developing your next smart application.

Bring your laptop! Guy will walk you through simple steps for creating a numeric regression machine learning model using Amazon Machine Learning.
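If you want a head start before the hands-on portion, the sketch below shows roughly what creating a numeric regression model looks like through the Amazon Machine Learning API using boto3; the bucket, schema location, record fields, and IDs are placeholder assumptions, and status polling and error handling are omitted.

import boto3

ml = boto3.client("machinelearning", region_name="us-east-1")

# Point Amazon ML at training data in S3 (a CSV plus a JSON schema describing its columns).
ml.create_data_source_from_s3(
    DataSourceId="ds-housing-train",                     # placeholder ID
    DataSpec={
        "DataLocationS3": "s3://my-example-bucket/housing/train.csv",
        "DataSchemaLocationS3": "s3://my-example-bucket/housing/train.csv.schema",
    },
    ComputeStatistics=True,
)

# Train a numeric regression model against that data source.
ml.create_ml_model(
    MLModelId="ml-housing-regression",                   # placeholder ID
    MLModelType="REGRESSION",
    TrainingDataSourceId="ds-housing-train",
)

# After the model reports READY, create a real-time endpoint and request a prediction.
endpoint = ml.create_realtime_endpoint(MLModelId="ml-housing-regression")
prediction = ml.predict(
    MLModelId="ml-housing-regression",
    Record={"sqft": "1420", "bedrooms": "3"},            # all record values are strings
    PredictEndpoint=endpoint["RealtimeEndpointInfo"]["EndpointUrl"],
)
print(prediction["Prediction"]["predictedValue"])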

To gain background on machine learning in the cloud before the meetup, consider reading Guy’s blog posts on machine learning.

"Birds of a Feather" Discussions

After Guy’s discussion, we’ll break into groups to discuss topics of interest.

When & Where

May 5 @ 6:00 p.m. at the Mitchell Park Community Center, 3700 Middlefield Road, Palo Alto, CA (map)


About AWS Big Data Meetups

The AWS Big Data Meetup brings Big Data developers and enthusiasts together to discuss Big Data solutions with each other and AWS team members. At the event you will hear speakers from AWS and the wider community who are pushing the boundaries of Big Data. We are committed to maintaining a technical focus, and invite you to participate as a guest or as a speaker. There will be food and plenty of time for one-on-one conversations.

Check out the video from an earlier meetup. Netflix talks about how they use Spark & Presto, and AWS speaker Rahul Bhartia discusses Hadoop Security.

Linux Kernel BPF JIT Spraying (grsecurity forums)

Post Syndicated from jake original http://lwn.net/Articles/686098/rss

Over at the grsecurity forums, Brad Spengler writes about a recently released proof of concept attack on the kernel using JIT spraying. “What happened next was the hardening of the BPF interpreter in grsecurity to prevent such future abuse: the previously-abused arbitrary read/write from the interpreter was now restricted only to the interpreter buffer itself, and the previous warn on invalid BPF instructions was turned into a BUG() to terminate execution of the exploit. I also then developed GRKERNSEC_KSTACKOVERFLOW which killed off the stack overflow class of vulns on x64.

A short time later, there was work being done upstream to extend the use of BPF in the kernel. This new version was called eBPF and it came with a vastly expanded JIT. I immediately saw problems with this new version and noticed that it would be much more difficult to protect — verification was being done against a writable buffer and then translated into another writable buffer in the extended BPF language. This new language allowed not just arbitrary read and write, but arbitrary function calling.”
The protections in the grsecurity kernel will thus prevent this attack. In addition, the newly released RAP feature for grsecurity, which targets the elimination of return-oriented programming (ROP) vulnerabilities in the kernel, will ensure that "the fear of JIT spraying goes away completely", he said.

How to Use Multiple Hard Drives With Time Machine

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/use-multiple-hard-drives-time-machine/

Time Machine logo and multiple RAID devices

Apple’s Time Machine software helps you create a backup of your Mac hard drive. That’s great, but what if something happens to the external drive you’re using for the Time Machine backup? If you’re following our 3-2-1 Backup Strategy, then you’ll be protected, but you can do more, too.

How about using multiple backup drives with Time Machine? Here’s how.

Some Background on Time Machine

Time Machine is more than just backup software for your Mac. You can think of it as a tool that keeps moments in time for you to look back on, so you can recover deleted or missing files or even revert to older versions of files you’ve worked on.

Get more details about Time Machine in our guide, How to Back Up Your Mac.

Time Machine stores hourly backups for 24 hours, daily backups for the past month, and weekly backups for all previous months for as much space as you have on your Time Machine backup drive. The oldest backups get deleted when the drive fills up. That makes it great for backups, but not great for archives, which require long-term storage. For more on the difference between backups and archives, see our post, What’s the Diff: Backup vs Archive.

Which Storage Devices Can Be Used with Time Machine?

Time Machine supports any of the following external storage devices.

  • External drive connected to your Mac, such as a USB, Thunderbolt, or FireWire drive
  • External drive connected to an AirPort Extreme Base Station (802.11ac model) or AirPort Time Capsule (see our post, What's the Diff: Time Machine vs. Time Capsule)
  • AirPort Time Capsule
  • Mac shared as a Time Machine backup destination
  • Network-attached storage (NAS) device that supports Time Machine over SMB

What is less well known is that you can use a single Time Machine backup drive with multiple Macs. If you have a large disk, you can partition it and use part of it for regular data and part of it for a Time Machine backup.

You also can use your Mac with more than one Time Machine backup drive. Let’s see how that works.

Use More than One Backup Disk Using Disk Rotation

Disk rotation is a technique borrowed from corporate IT professionals. The adage, “Don’t put all your eggs in one basket,” is the reason. While Time Machine is great backup software, it’s not foolproof. If your Time Machine backup drive dies — as hard drives eventually do — all of that data will be gone.

Are you interested in hard drive failure rates? So are we! You see, we use over 100,000 hard drives in our cloud data centers! Read our Hard Drive Reliability Stats to learn more.

Having Backblaze Personal Cloud Backup is a great way to fix that, of course, because your data is also backed up to the cloud. They’re complementary to one another: Backblaze tracks the last 30 days of changes to your files, for example, while Time Machine will keep track of as many changes as it can within the storage capacity of your backup drive. So it’s nice — ideal, really — to have both.

Tip: If you have a Backblaze backup in addition to Time Machine, and you’re planning to restore from Time Machine, it’s a good idea to save a restore from Backblaze prior to initiating your Time Machine restore in case anything goes wrong. We wrote a help page on the topic, Before You Restore With Time Machine.

Fortunately, Time Machine handles disk rotation with aplomb. You can attach a second hard drive and use it with Time Machine with only a couple of clicks. When your first backup drive is connected, Time Machine will back everything up; then it'll do the same for the second one. Each time a drive is reconnected, Time Machine backs up everything that has changed on your Mac's hard drive since the last time that particular drive was connected, so each drive keeps a complete Time Machine backup.

How to Use Multiple Backup Drives with Time Machine

  1. Connect your second hard drive to your Mac.
  2. Click on the Time Machine icon in the menu bar, then click on Open Time Machine preferences.
  3. Click Select Disk.
    Time Machine dialog showing opening preferences
  4. Select the drive you want to rotate, then click Use Disk.
  5. Time Machine will ask you if you want to replace your existing Time Machine drive, or use both drives. Click Use Both.
    Time Machine dialog showing disk preferences

Time Machine will now back up to each individual drive as they’re connected.
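If you prefer to script the rotation rather than click through System Preferences, here is a minimal sketch that drives Apple's tmutil utility from Python; the volume paths are assumptions, and tmutil setdestination generally has to run with administrator privileges.

import subprocess

# Placeholder volume names; substitute your own backup drives.
DESTINATIONS = ["/Volumes/TM-Home", "/Volumes/TM-Office"]

def add_destinations(volumes):
    """Register each mounted volume as an additional Time Machine destination."""
    for volume in volumes:
        # -a appends the destination instead of replacing the current one.
        subprocess.run(["sudo", "tmutil", "setdestination", "-a", volume], check=True)

def show_destinations():
    """Print the destinations Time Machine currently knows about."""
    print(subprocess.run(["tmutil", "destinationinfo"],
                         capture_output=True, text=True).stdout)

def start_backup():
    """Kick off a backup to whichever configured destination is currently available."""
    subprocess.run(["tmutil", "startbackup"], check=True)

if __name__ == "__main__":
    add_destinations(DESTINATIONS)
    show_destinations()
    start_backup()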

When you want to check on your Time Machine backups later, all you need to do is hold down the option key when clicking on the Time Machine icon in your menu bar. You’ll see Browse Other Backup Disks. You can use that to browse whichever Time Machine backup you’d like.

The same process works if you mix a Time Machine backup drive with Apple's Time Capsule network device (a home Wi-Fi router with a built-in backup drive). You can back up to both without any problem.

Using this procedure, your data is backed up on two (or more) drives. You can keep one at home and the other at the office, for example. That way you'll never be without a backup you can recover from quickly and easily.

Other Time Machine Backup Tips:

  • To exclude items from your backup, open Time Machine preferences, click Options, then click the Add (+) button to add an item to be excluded. To stop excluding an item, such as an external hard drive, select the item and click the Remove (–) button.
  • If using Time Machine to back up to a network disk, you can verify those backups to make sure they’re in good condition. Press and hold Option, then choose Verify Backups from the Time Machine menu.
  • In OS X Lion v10.7.3 or later, you can start up from your Time Machine disk, if necessary. Press and hold Option as your Mac starts up. When you see the Startup Manager screen, choose “EFI Boot” as the startup disk.
  • If you’re a Synology NAS user, you might be interested in our blog post, Backup and Restore Time Machine using Synology and the B2 Cloud.

Do You Use Time Machine for Local Backups?

Do you use Time Machine and have you set up a disk rotation scheme? Do you combine Time Machine with cloud backup? Or, do you still have questions? Let us know in the comments.

•  •  •

Editor’s Note:  This post was updated from May 3, 2016.

The post How to Use Multiple Hard Drives With Time Machine appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Security advisories for Tuesday

Post Syndicated from ris original http://lwn.net/Articles/686081/rss

Debian-LTS has updated openjdk-7 (multiple vulnerabilities) and smarty3 (code execution).

Fedora has updated php (F23: multiple vulnerabilities).

Gentoo has updated git (multiple vulnerabilities).

Oracle has updated mercurial (OL7: two vulnerabilities).

Scientific Linux has updated mercurial (SL7: two vulnerabilities).

Slackware has updated mercurial (code execution).

Ubuntu has updated libtasn1-3, libtasn1-6 (15.10, 14.04, 12.04: denial of service), libtasn1-6 (16.04: denial of service), openssl (multiple vulnerabilities), poppler (15.10, 14.04, 12.04: multiple vulnerabilities), and firefox (12.04: denial of service).

How to Control Access to Your Amazon Elasticsearch Service Domain

Post Syndicated from Karthi Thyagarajan original https://blogs.aws.amazon.com/security/post/Tx3VP208IBVASUQ/How-to-Control-Access-to-Your-Amazon-Elasticsearch-Service-Domain

With the recent release of Amazon Elasticsearch Service (Amazon ES), you now can build applications without setting up and maintaining your own search cluster on Amazon EC2. One of the key benefits of using Amazon ES is that you can leverage AWS Identity and Access Management (IAM) to grant or deny access to your search domains. In contrast, if you were to run an unmanaged Elasticsearch cluster on AWS, leveraging IAM to authorize access to your domains would require more effort.

In this blog post, I will cover approaches for using IAM to set permissions for an Amazon ES deployment. I will start by considering the two broad options available for Amazon ES: resource-based permissions and identity-based permissions. I also will explain Signature Version 4 signing, and look at some real-world scenarios and approaches for setting Amazon ES permissions. Last, I will present an architecture for locking down your Amazon ES deployment by leveraging a proxy, while still being able to use Kibana for analytics.

Note: This blog post assumes that you are already familiar with setting up an Amazon ES cluster. To learn how to set up an Amazon ES cluster before proceeding, see New – Amazon Elasticsearch Service.

Options for granting or denying access to Amazon ES endpoints

In this section, I will provide details about how you can configure your Amazon ES domains so that only trusted users and applications can access them. In short, Amazon ES adds support for an authorization layer by integrating with IAM. You write an IAM policy to control access to the cluster’s endpoint, allowing or denying Actions (HTTP methods) against Resources (the domain endpoint, indices, and API calls to Amazon ES). For an overview of IAM policies, see Overview of IAM Policies.

You attach the policies that you build in IAM or in the Amazon ES console to specific IAM entities (in other words, the Amazon ES domain, users, groups, and roles):

  1. Resource-based policies – This type of policy is attached to an AWS resource, such as an Amazon S3 bucket, as described in Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket.
  2. Identity-based policies – This type of policy is attached to an identity, such as an IAM user, group, or role.

The union of all policies covering a specific entity, resource, and action controls whether the calling entity is authorized to perform that action on that resource.

A note about authentication, which applies to both types of policies: you can use two strategies to authenticate Amazon ES requests. The first is based on the originating IP address: you can omit the Principal from your policy and specify an IP Condition. In this case, and barring a conflicting policy, any call from that IP address will be allowed or denied access to the resource in question. The second strategy is based on the originating Principal: you must include information that AWS can use to authenticate the requestor as part of every request to your Amazon ES endpoint, which you accomplish by signing the request using Signature Version 4. Later in this post, I provide an example of how you can sign a simple request against Amazon ES using Signature Version 4. With that clarification about authentication in mind, let's start with how to configure resource-based policies.

How to configure resource-based policies

A resource-based policy is attached to the Amazon ES domain (accessible through the domain’s console) and enables you to specify which AWS account and which AWS users or roles can access your Amazon ES endpoint. In addition, a resource-based policy lets you specify an IP condition for restricting access based on source IP addresses. The following screenshot shows the Amazon ES console pane where you configure the resource-based policy of your endpoint.

In the preceding screenshot, you can see that the policy is attached to an Amazon ES domain called recipes1, which is defined in the Resource section of the policy. The policy itself has a condition specifying that only requests from a specific IP address should be allowed to issue requests against this domain (though not shown here, you can also specify an IP range using Classless Inter-Domain Routing [CIDR] notation).

In addition to IP-based restrictions, you can restrict Amazon ES endpoint access to certain AWS accounts or users. The following code shows a sample resource-based policy that allows only the IAM user recipes1alloweduser to issue requests. (Be sure to replace placeholder values with your own AWS resource information.)

{
  "Version": "2012-10-17",
  "Statement": [{
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:user/recipes1alloweduser"
      },      
      "Action": "es:*", 
      "Resource": "arn:aws:es:us-west-2:111111111111:domain/recipes1/*" 
    }   
  ] 
} 

This sample policy grants recipes1alloweduser the ability to perform any Amazon ES–related actions (represented by "Action":"es:*") against the recipes1 domain.

For the preceding policy, you must issue a Signature Version 4 signed request; see Examples of the Complete Version 4 Signing Process (Python) for more information. Because those examples are in Python, I am including the following code for Java developers that illustrates how to issue a Signature Version 4 signed request to an Amazon ES endpoint. The sample code shown breaks down the signing process into three main parts that are contained in the functions: generateRequest(), performSigningSteps(), and sendRequest(). Most of the action related to signing takes place in the performSigningSteps() function, and you will need to download and refer to the AWS SDK for Java to use classes such as AWS4Signer that are used in that function.

By using the SDK, you hand over all the heavy lifting associated with signing to the SDK. You simply have to set up the request, provide the key parameters required for signing (such as service name, region, and your credentials), and call the sign method on the AWS4Signer class. Be sure that you avoid hard-coding your credentials in your code.

/// Set up the request
private static Request<?> generateRequest() {
       Request<?> request = new DefaultRequest<Void>(SERVICE_NAME);
       request.setContent(new ByteArrayInputStream("".getBytes()));
       request.setEndpoint(URI.create(ENDPOINT));
       request.setHttpMethod(HttpMethodName.GET);
       return request;
}

/// Perform Signature Version 4 signing
private static void performSigningSteps(Request<?> requestToSign) {
       AWS4Signer signer = new AWS4Signer();
       signer.setServiceName(SERVICE_NAME);
       signer.setRegionName(REGION);      

       // Get credentials
       // NOTE: *Never* hard-code credentials
       //       in source code
       AWSCredentialsProvider credsProvider =
                     new DefaultAWSCredentialsProviderChain();

       AWSCredentials creds = credsProvider.getCredentials();

       // Sign request with supplied creds
       signer.sign(requestToSign, creds);
}

/// Send the request to the server
private static void sendRequest(Request<?> request) {
       ExecutionContext context = new ExecutionContext(true);

       ClientConfiguration clientConfiguration = new ClientConfiguration();
       AmazonHttpClient client = new AmazonHttpClient(clientConfiguration);

       MyHttpResponseHandler<Void> responseHandler = new MyHttpResponseHandler<Void>();
       MyErrorHandler errorHandler = new MyErrorHandler();

       Response<Void> response =
                     client.execute(request, responseHandler, errorHandler, context);
}

public static void main(String[] args) {
       // Generate the request
       Request<?> request = generateRequest();

       // Perform Signature Version 4 signing
       performSigningSteps(request);

       // Send the request to the server
       sendRequest(request);
}

Keep in mind that your own generateRequest method will be specialized to your application, including request type and content body. The values of the referenced variables are as follows.

private static final String SERVICE_NAME = "es";
private static final String REGION = "us-west-2";
private static final String HOST = "search-recipes1-xxxxxxxxx.us-west-2.es.amazonaws.com";
private static final String ENDPOINT_ROOT = "https://" + HOST;
private static final String PATH = "/";
private static final String ENDPOINT = ENDPOINT_ROOT + PATH;

Again, be sure to replace placeholder values with your own AWS resource information, including the host value, which is generated as part of the cluster creation process.

How to configure identity-based policies

In contrast to resource-based policies, with identity-based policies you can specify which actions an IAM identity can perform against one or more AWS resources, such as an Amazon ES domain or an S3 bucket. For example, the following sample inline IAM policy is attached to an IAM user.

{
 "Version": "2012-10-17",
 "Statement": [
  {
   "Resource": "arn:aws:es:us-west-2:111111111111:domain/recipes1/*",
   "Action": ["es:*"],
   "Effect": "Allow"
  }
 ]
}

By attaching the preceding policy to an identity, you give that identity the permission to perform any actions against the recipes1 domain. To issue a request against the recipes1 domain, you would use Signature Version 4 signing as described earlier in this post.

With Amazon ES, you can lock down access even further. Let’s say that you wanted to organize access based on job functions and roles, and you have three users who correspond to three job functions:

  • esadmin: The administrator of your Amazon ES clusters.
  • poweruser: A power user who can access all domains, but cannot perform management functions.
  • analyticsviewer: A user who can only read data from the analytics index.

Given this division of responsibilities, the following policies correspond to each user.

Policy for esadmin

{
 "Version": "2012-10-17",
 "Statement": [
  {
   "Resource": "arn:aws:es:us-west-2:111111111111:domain/*",
   "Action": ["es:*"],
   "Effect": "Allow"
  }
 ]
}

The preceding policy allows the esadmin user to perform all actions (es:*) against all Amazon ES domains in the us-west-2 region.

Policy for poweruser

{
 "Version": "2012-10-17",
 "Statement": [
  {
   "Resource": "arn:aws:es:us-west-2:111111111111:domain/*",
   "Action": ["es:*"],
   "Effect": "Allow"
  },
  {
   "Resource": "arn:aws:es:us-west-2:111111111111:domain/*",
   "Action": ["es:DeleteElasticsearchDomain",
              "es:CreateElasticsearchDomain"],
   "Effect": "Deny"
  }
 ]
}

The preceding policy gives the poweruser user the same permission as the esadmin user, except for the ability to create and delete domains (the Deny statement).

Policy for analyticsviewer

{
 "Version": "2012-10-17",
 "Statement": [
  {
   "Resource":
    "arn:aws:es:us-west-2:111111111111:domain/recipes1/analytics",
   "Action": ["es:ESHttpGet"],
   "Effect": "Allow"
  }
 ]
}

The preceding policy gives the analyticsviewer user the ability to issue HTTP GET requests (es:ESHttpGet) against the analytics index that is part of the recipes1 domain. This is a limited policy that prevents the analyticsviewer user from performing any other actions against that index or domain.

For more details about configuring Amazon ES access policies, see Configuring Access Policies. The specific policies I just shared and any other policies you create can be associated with an AWS identity, group, or role, as described in Overview of IAM Policies.

Combining resource-based and identity-based policies

Now that I have covered the two types of policies that you can use to grant or deny access to Amazon ES endpoints, let’s take a look at what happens when you combine resource-based and identity-based policies. First, why would you want to combine these two types of policies? One use case involves cross-account access: you want to allow identities in a different AWS account to access your Amazon ES domain. You could configure a resource-based policy to grant access to that account ID, but an administrator of that account would still need to use identity-based policies to allow identities in that account to perform specific actions against your Amazon ES domain. For more information about how to configure cross-account access, see Tutorial: Delegate Access Across AWS Accounts Using IAM Roles.

The following table summarizes the results of mixing policy types.

One of the key takeaways from the preceding table is that a Deny always wins if one policy type has an Allow and there is a competing Deny in the other policy type. Also, when you do not explicitly specify a Deny or Allow, access is denied by default. For more detailed information about combining policies, see Policy Evaluation Basics.

Deployment considerations

With the discussion about the two types of policies in mind, let’s step back and look at deployment considerations. Kibana, which is a JavaScript-based UI that accompanies Elasticsearch and Amazon ES, allows you to extract valuable insights from stored data. When you deploy Amazon ES, you must ensure that the appropriate users (such as administrators and business intelligence analysts) have access to Kibana while also ensuring that you provide secure access from your applications to your various Amazon ES search domains.

When leveraging resource-based or identity-based policies to grant or deny access to Amazon ES endpoints, clients can either rely on anonymous, IP-based policies or use policies that specify a Principal and issue Signature Version 4 signed requests. In addition, because Kibana runs as JavaScript in the browser, its requests originate from each end user's IP address. This makes unauthenticated, IP-based access control impractical in most cases because of the sheer number of IP addresses that you may need to whitelist.

Given this IP-based access control limitation, you need a way to present Kibana with an endpoint that does not require Signature Version 4 signing. One approach is to put a proxy between Amazon ES and Kibana, and then set up a policy that allows only requests from the IP address of this proxy. By using a proxy, you only have to manage a single IP address (that of the proxy). I describe this approach in the following section.

Proxy-based access to Amazon ES from Kibana

As mentioned previously, a proxy can funnel access for clients that need to use Kibana. This approach still allows nonproxy–based access for other application code that can issue Signature Version 4 signed requests. The following diagram illustrates this approach, including a proxy to funnel Kibana access.

The key details of the preceding diagram are described as follows:

  1. This is your Amazon ES domain, which resides in your AWS account. IAM provides authorized access to this domain. An IAM policy provides whitelisted access to the IP address of the proxy server through which your Kibana client will connect.
  2. This is the proxy whose IP address is allowed access to your Amazon ES domain. You also could leverage an NGINX proxy, as described in the NGINX Plus on AWS whitepaper.
  3. Application code running on EC2 instances uses the Signature Version 4 signing process to issue requests against your Amazon ES domain.
  4. Your Kibana client application connects to your Amazon ES domain through the proxy.

To facilitate the security setup described in items 1 and 2, you need a resource-based policy to lock down the Amazon ES domain. That policy follows.

{
 "Version": "2012-10-17",
 "Statement": [
  {
   "Resource":
    "arn:aws:es:us-west-2:111111111111:domain/recipes1/analytics",
   "Principal": {
        "AWS": "arn:aws:iam::111111111111:instance-profile/iprofile1"
   },
   "Action": ["es:ESHttpGet"],
   "Effect": "Allow"
  },
  {
   "Effect": "Allow",
   "Principal": {
     "AWS": "*"
   },
   "Action": "es:*",
   "Condition": {
     "IpAddress": {
       "aws:SourceIp": [
         "AAA.BBB.CCC.DDD"
       ]
     }
   },
   "Resource":
    "arn:aws:es:us-west-2:111111111111:domain/recipes1/analytics"
  }
 ]
}

This policy allows clients—such as the app servers in the VPC subnet shown in the preceding diagram—that are capable of sending Signature Version 4 signed requests to access the Amazon ES domain. At the same time, the policy allows Kibana clients to access the domain via a proxy, whose IP address is specified in the policy: AAA.BBB.CCC.DDD. For added security, you can configure this proxy so that it authenticates clients, as described in Using NGINX Plus and NGINX to Authenticate Application Users with LDAP.

Conclusion

Using the techniques in this post, you can grant or deny access to your Amazon ES domains by using resource-based policies, identity-based policies, or both. As I showed, when accessing an Amazon ES domain, you must issue Signature Version 4 signed requests, which you can accomplish using the sample Java code provided. In addition, by leveraging the proxy-based topology shown in the last section of this post, you can present the Kibana UI to users without compromising security.

If you have questions or comments about this blog post, please submit them in the “Comments” section below, or contact:

– Karthi

Pocket FM: independent radio in Syria

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/pocket-fm-independent-radio-syria/

When we started thinking about the Raspberry Pi project back in 2009, our ambitions were small, and very focussed on local education.

We realised we were doing something bigger than that pretty rapidly, but all the same, some of the projects we come across leave us shocked at their scale, their gravity and their importance. This is one of them.

"Do you have a radio? 87.7 FM"

In Syria, a German group called Media in Cooperation and Transition (MiCT) has been equipping towns with transmitters called PocketFM, built around Raspberry Pis, to provide Syrians with independent radio. Each transmitter has 4 to 6km (2.5 to 3.75 miles) of range, which is sufficient to reach a whole town.

In many parts of Syria, it’s impossible and politically unwise to build large transmitters, so a small device like PocketFM that can be easily concealed and transported, and that can be run off solar power or a car battery, is ideal.

pocketfm

Around a dozen independent Syrian radio stations have come together to form Syrnet. They collaborate on programmes and topics and produce a joint station that is broadcast via the PocketFM transmitters; MiCT deals with the mix, distribution and transmission. "The variety of voices in a broadcast effectively illustrates Syria's state of mind," says one of the broadcasters. Using PocketFM, Syrnet is reaching 1.5 million citizens in north and north-western Syria, including Homs and Aleppo; they are currently making efforts to widen the network to more regions.

radio stations

The project is about enabling freedom of expression; it also strengthens feelings of solidarity. “We are not for anyone, or against anyone. No one can escape our criticism, even ourselves.”

Between them, the participating stations have access to hundreds of reporters. As well as news, music and entertainment, they’re broadcasting vital information on security, health and nutrition. “One of our strongest programmes is called Alternatives. It describes how to keep warm without any fuel, or how to pick up the internet signal of neighbouring countries when the Syrian internet is down. The difficulties of life – and how to overcome them.”

Syria Radio Network

Syria Radio Network (Syrnet) is an initiative to support independent radio production in Syria with professional training and outreach. Syrnet is a mixed live programme, sourced from Syrian radio stations. Our programme is available 24 hours a day, seven days a week.

In a warzone, radio can be one of the easiest ways to get information. If the power grid is down, you just need batteries.

“We lost one device in Kobane”, says Philipp Hochleichter from MiCT, who is the project’s technical lead. “But due to the bombing – not due to a malfunction.”

“At the moment our journalists are safe with the opposition, but it’s still a war zone with gunfire and shelling,” said Marwa, a journalist with Hara FM, one of the Syrnet stations, based in Turkey.

“I worry about our staff in Aleppo, but no journalist can be 100% safe anywhere in the world.

“For any journalist, telling the truth puts them in danger.”

These bold people are doing something extraordinary. We send them all our very best wishes, and our hopes for a swift end to the conflict.

The post Pocket FM: independent radio in Syria appeared first on Raspberry Pi.

MISP – Malware Information Sharing Platform

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/iiOT5t53d-Y/

MISP, the Malware Information Sharing Platform and Threat Sharing project, is an open source software solution for collecting, storing, distributing and sharing cyber security indicators and threat information about cyber security incidents and malware analysis. MISP is designed by and for incident analysts, security and ICT professionals, and malware reversers to…

Read the full post at darknet.org.uk

May Android security bulletin

Post Syndicated from corbet original http://lwn.net/Articles/686006/rss

The Android security bulletin for May is available. It lists 40 different CVE numbers addressed by the May over-the-air update; the bulk of those are at a severity level of "high" or above. "Partners were notified about the issues described in the bulletin on April 04, 2016 or earlier. Source code patches for these issues will be released to the Android Open Source Project (AOSP) repository over the next 48 hours. We will revise this bulletin with the AOSP links when they are available. The most severe of these issues is a Critical security vulnerability that could enable remote code execution on an affected device through multiple methods such as email, web browsing, and MMS when processing media files."

Satoshi: how Craig Wright’s deception worked

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/05/satoshi-how-craig-wrights-deception.html

My previous post shows how anybody can verify Satoshi using a GUI. In this post, I’ll do the same, with command-line tools (openssl). It’s just a simple application of crypto (hashes, public-keys) to the problem.

I go through this step-by-step discussion in order to demonstrate Craig Wright's scam. Dan Kaminsky's post and the redditors come to the same conclusion through a different sequence, but I think my way is clearer.

Step #1: the Bitcoin address

We know certain Bitcoin addresses correspond to Satoshi Nakamoto himself or herself. For the sake of discussion, we'll use the address 15fszyyM95UANiEeVa4H5L6va7Z7UFZCYP. It's actually my address, but we'll pretend it's Satoshi's. In this post, I'm going to prove that this address belongs to me.

The address isn't the public-key, as you'd expect, but the hash of the public-key. Hashes are a lot shorter, and easier to pass around. We only pull out the public-key when we need to do a transaction. The hashing algorithm is explained on this website [http://gobittest.appspot.com/Address]. It's basically base58(ripemd160(sha256(public-key))).
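As a rough illustration of that formula, here is a minimal Python sketch of the hash-then-encode step (Base58Check with the mainnet 0x00 version byte and a 4-byte checksum). It assumes your Python build exposes RIPEMD-160 through hashlib, and it is a teaching aid rather than a substitute for a real Bitcoin library.

import hashlib

BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data):
    """Encode bytes in Bitcoin's Base58 alphabet, preserving leading zero bytes as '1'."""
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = BASE58_ALPHABET[rem] + out
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def pubkey_to_address(pubkey_hex):
    """base58check(0x00 || ripemd160(sha256(public-key)))"""
    pubkey = bytes.fromhex(pubkey_hex)
    h160 = hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).digest()
    payload = b"\x00" + h160                              # version byte for mainnet addresses
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return base58_encode(payload + checksum)

# The uncompressed public-key quoted later in this post; if the key and address really
# correspond, this should print 15fszyyM95UANiEeVa4H5L6va7Z7UFZCYP.
print(pubkey_to_address(
    "04b19ffb77b602e4ad3294f770130c7677374b84a7a164fe6a80c81f13833a673d"
    "bcdb15c29857ce1a23fca1c808b9c29404b84b986924e6ff08fb3517f38bc099"
))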

Step #2: You get the public-key

Hashes are one-way, so given a Bitcoin address, we can’t immediately convert it into a public-key. Instead, we have to look it up in the blockchain, the vast public ledger that is at the heart of Bitcoin. The blockchain records every transaction, and is approaching 70-gigabytes in size.

To find an address's matching public-key, we have to search for a transaction where the bitcoin is spent. If an address has only received Bitcoins, then its matching public-key won't appear in the blockchain. In that case, a person trying to prove their identity will have to tell you the public-key, which is fine, of course, since the keys are designed to be public.

Luckily, there are lots of websites that store the blockchain in a database and make it easy for us to browse. I use Blockchain.info. The URL to my address is:

https://blockchain.info/address/15fszyyM95UANiEeVa4H5L6va7Z7UFZCYP

There is a list of transactions here where I spend coin. Let’s pick the top one, at this URL:

https://blockchain.info/tx/8c4263d864d4f36e4eb4065a877e3e9a68cbe1de63a7b1fda70096e1e209cbbb

Toward the bottom are the "scripts". Bitcoin has a small scripting language, allowing complex transactions to be created, but most transactions are simple. There are two common formats for these scripts, an old format and a new format. In the old format, you'll find the public-key in the Output Script. In the new format, you'll find the public-key in the Input Scripts. It'll be a long number starting with "04".

In this case, my public-key is:

04b19ffb77b602e4ad3294f770130c7677374b84a7a164fe6a80c81f13833a673dbcdb15c29857ce1a23fca1c808b9c29404b84b986924e6ff08fb3517f38bc099

You can verify this hashes to my Bitcoin address by the website I mention above.

Step #3: You format the key according to OpenSSL

OpenSSL wants the public-key in its own format (wrapped in ASN.1 DER, then encoded in BASE64). I should just insert the JavaScript form to do it directly in this post, but I'm lazy. Instead, use the following code in the file "foo.js":

KeyEncoder = require('key-encoder');
sec = new KeyEncoder('secp256k1');
args = process.argv.slice(2);
pemKey = sec.encodePublic(args[0], 'raw', 'pem');
console.log(pemKey);
Then run:
npm install key-encoder

node foo.js 04b19ffb77b602e4ad3294f770130c7677374b84a7a164fe6a80c81f13833a673dbcdb15c29857ce1a23fca1c808b9c29404b84b986924e6ff08fb3517f38bc099
This will output the following, which you can save as the file pub.pem:
-----BEGIN PUBLIC KEY-----
MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAEsZ/7d7YC5K0ylPdwEwx2dzdLhKehZP5q
gMgfE4M6Zz282xXCmFfOGiP8ocgIucKUBLhLmGkk5v8I+zUX84vAmQ==
-----END PUBLIC KEY-----
To verify that we have a correctly formatted OpenSSL public-key, we do the following command. As you can see, the hex of the OpenSSL public-key agrees with the original hex above 04b19ffb… that I got from the Blockchain: 
$ openssl ec -in pub.pem -pubin -text -noout
read EC key
Private-Key: (256 bit)
pub:
    04:b1:9f:fb:77:b6:02:e4:ad:32:94:f7:70:13:0c:
    76:77:37:4b:84:a7:a1:64:fe:6a:80:c8:1f:13:83:
    3a:67:3d:bc:db:15:c2:98:57:ce:1a:23:fc:a1:c8:
    08:b9:c2:94:04:b8:4b:98:69:24:e6:ff:08:fb:35:
    17:f3:8b:c0:99
ASN1 OID: secp256k1

Step #4: I create a message file

What we are going to do is sign a message. That could be a message you create for me to sign, or I can simply create my own message file.
In this example, I’m going to use the file message.txt:
Robert Graham is Satoshi Nakamoto

Obviously, if I can sign this file with Satoshi’s key, then I’m the real Satoshi.

There’s a problem here, though. The message I choose can be too long (such as when choosing a large work of Sartre). Or, in this case, depending on how you copy/paste the text into a file, it may end with varying “line-feeds” and “carriage-returns”. 

Therefore, at this stage, I may instead just choose to hash the message file into something smaller and more consistent. I’m not going to in my example, but that’s what Craig Wright does in his fraudulent example. And it’s important.

BTW, if you just echo from the command-line, or use ‘vi’ to create a file, it’ll automatically append a single line-feed. That’s what I assume for my message. In hex you should get:

$ xxd -i message.txt
unsigned char message_txt[] = {
  0x52, 0x6f, 0x62, 0x65, 0x72, 0x74, 0x20, 0x47, 0x72, 0x61, 0x68, 0x61,
  0x6d, 0x20, 0x69, 0x73, 0x20, 0x53, 0x61, 0x74, 0x6f, 0x73, 0x68, 0x69,
  0x20, 0x4e, 0x61, 0x6b, 0x61, 0x6d, 0x6f, 0x74, 0x6f, 0x0a
};
unsigned int message_txt_len = 34;

Step #5: I grab my private-key from my wallet

To prove my identity, I extract my private-key from my wallet file, and convert it into an OpenSSL file in a method similar to that above, creating the file priv.pem (the sister of the pub.pem that you create). I’m skipping the steps, because I’m not actually going to show you my private key, but they are roughly the same as above. Bitcoin-qt has a little “dumprivkey” command that’ll dump the private key, which I then wrap in OpenSSL ASN.1. If you want to do this, I used the following node.js code, with the “base-58” and “key-encoder” dependencies.
Base58 = require("base-58");
KeyEncoder = require('key-encoder');
sec = new KeyEncoder('secp256k1');
var args = process.argv.slice(2);
var x = Base58.decode(args[0]);
x = x.slice(1);
if (x.length == 36)
    x = x.slice(0, 32);
pemPrivateKey = sec.encodePrivate(x, 'raw', 'pem');
console.log(pemPrivateKey)

Step #6: I sign the message.txt with priv.pem

I then sign the file message.txt with my private-key priv.pem, and save the base64 encoded results in sig.b64.
openssl dgst -sign priv.pem message.txt | base64 >sig.b64
This produces the file sig.b64, which has the following contents:
MEUCIQDoy6K0xQ1cAPg7fXbQcmfbtK4VJ5wlMTzG4DaUV3zF9gIgLNbJw0oqj3lQf7lhe7TtPzse
PXf8GB3q4IhCiWVxTJ8=
How signing works is that it first creates a SHA256 hash of the file message.txt, then it encrypts it with the secp256k1 public-key algorithm. It wraps the result in a ASN.1 DER binary file. Sadly, there’s no native BASE64 file format, so I have to encode it in BASE64 myself in order to post on this page, and you’ll have to BASE64 decode it before you use it.

Step #7: You verify the signature

Okay, at this point you have three files. You have my public-key pub.pem, my message message.txt, and the signature sig.b64.
First, you need to convert the signature back into binary:
base64 -d sig.b64 > sig.der
Now you run the verify command:
openssl dgst -verify pub.pem -signature sig.der message.txt
If I'm really who I say I am, then you'll see the result:
Verified OK
If something has gone wrong, you’ll get the error:
Verification Failure
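If you would rather not shell out to openssl, here is a minimal cross-check using the third-party Python cryptography package (my own addition, not something the original post uses); it performs the same verification: a SHA-256 digest checked as an ECDSA signature under the secp256k1 public key.

import base64

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.serialization import load_pem_public_key

public_key = load_pem_public_key(open("pub.pem", "rb").read(), backend=default_backend())
signature = base64.b64decode(open("sig.b64", "rb").read())   # ASN.1 DER-encoded ECDSA signature
message = open("message.txt", "rb").read()

try:
    # Equivalent to `openssl dgst -verify`: hash the message with SHA-256, verify with ECDSA.
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("Verified OK")
except InvalidSignature:
    print("Verification Failure")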

How we know the Craig Wright post was a scam

This post is structured similarly to Craig Wright's post, and from the differences we'll figure out how he did his scam.
As I point out in Step #4 above, a large file (like a work from Sartre) would be difficult to work with, so I could just hash it, and put the binary hash into a file. It's really all the same, because I'm creating some arbitrary unsigned bytes, then signing them.
But here's the clever bit. If you've been paying attention, you'll notice that the Sartre file has been hashed twice with SHA256 before the hash is encrypted. In other words, it looks like the function:
secp256k1(sha256(sha256(message)))
Now let's go back to Bitcoin transactions. Transactions are signed by first hashing twice:
secp256k1(sha256(sha256(transaction)))
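The "message digest" is therefore a double SHA-256 in both cases; as a tiny illustration (the input bytes here are just an example):

import hashlib

def double_sha256(data):
    """The digest that gets signed: sha256(sha256(data))."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

print(double_sha256(b"example transaction or 'Sartre' bytes").hex())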

Notice that the algorithms are the same. That's how Craig Wright tried to fool us. Unknown to us, he grabbed a transaction from the real Satoshi, and grabbed the initial hash (see the Update below for its contents). He then claimed that his "Sartre" file had that same hash:

479f9dff0155c045da78402177855fdb4f0f396dc0d2c24f7376dd56e2e68b05

Which signed (hashed again, then encrypted), becomes:

3045022100c12a7d54972f26d14cb311339b5122f8c187417dde1e8efb6841f55c34220ae0022066632c5cd4161efa3a2837764eee9eb84975dd54c2de2865e9752585c53e7cce
That's a lie. How are we supposed to know? After all, we aren't going to type in a bunch of hex digits and then go search the blockchain for those bytes. We didn't have a copy of the Sartre file to calculate the hash ourselves.
Now, when hashed and signed, the results from openssl exactly match the results from that old Bitcoin transaction. Craig Wright magically appears to have proven he knows Satoshi's private-key, when in fact he's copied the inputs/outputs and made us think we calculated them.
It would've worked, too, but there are too many damn experts in the blockchain community who immediately pick up on the subtle details. There are too many people willing to type in all those characters. Once typed in, it's a simple matter of googling them to find them in the blockchain.
Also, it looks as suspicious as all hell. He explains the trivial bits, like "what is hashing", with odd references to old publications, but then leaves out important bits. I had to write code in order to extract my own private-key from my wallet and make it into something that OpenSSL would accept, a step he didn't actually have to go through, and thus didn't have to document.

Conclusion

Both Bitcoin and OpenSSL are just straightforward applications of basic crypto. It's because they share those basics that this crossover works. And it's by applying our basic crypto knowledge to the problem that we catch him in the lie.
I write this post not really to catch Craig Wright in a scam, but to help teach basic crypto. Working backwards from this blogpost, and learning the bits you didn't understand, will teach you the important basics of crypto.

Appendix

To verify that I have that Bitcoin address, you’ll need the three files:
pub.pem

-----BEGIN PUBLIC KEY-----
MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAEsZ/7d7YC5K0ylPdwEwx2dzdLhKehZP5q
gMgfE4M6Zz282xXCmFfOGiP8ocgIucKUBLhLmGkk5v8I+zUX84vAmQ==
-----END PUBLIC KEY-----

message.txt

Robert Graham is Satoshi Nakamoto

sig.b64

MEUCIQDoy6K0xQ1cAPg7fXbQcmfbtK4VJ5wlMTzG4DaUV3zF9gIgLNbJw0oqj3lQf7lhe7TtPzsePXf8GB3q4IhCiWVxTJ8=

Now run the following command, and verify it matches the hex value for the public-key that you found in the transaction in the blockchain:


openssl ec -in pub.pem -pubin -text -noout

Now verify the message:
base64 -d sig.b64 > sig.der
openssl dgst -verify pub.pem -signature sig.der message.txt

Update:

The lie can be condensed into two images. The first shows excerpts from his post, where he claims the file "Sartre" has the specific sha256sum and contains the shown text.
But we know that this checksum instead matches an intermediate step in the 2009 Bitcoin transaction, which, if put in a file, would have the following contents:
The sha256sum result is the same in both cases, so either I'm lying or Craig Wright is. You can verify for yourself which one is lying by creating your own Sartre file from this base64 encoded data (copy/paste it into a file, then run base64 -d > Sartre to create the binary file).

AQAAAAG6kcHV5VqeL6tOQfVbhipzskcZqtE6Un0WnB+tO2O1EgEAAABDQQQR25Ph3NuKAWtJhA+MU7wetoo4LpexSC7K17FIppCaXLLg6t37hMz5dERk+C4WC/qbi2T51MA/mZuGQ/ZWtBKjrP////8CAMqaOwAAAABDQQS+2CfTdHS+/7N+/lM3AawffGAJV6RIe+izcTRvAWgm7m9XujDYikcqDk7NLwdZmnlfHwHeeNeRs4LmXuHFi0UIrADSSWsAAAAAQ0EEEduT4dzbigFrSYQPjFO8HraKOC6XsUguytexSKaQmlyy4Ord+4TM+XREZPguFgv6m4tk+dTAP5mbhkP2VrQSo6wAAAAAAQAAAA==

I got this file from https://rya.nc/sartre.html, after spending an hour looking for the right tool. Transactions are verified using a script within the transactions itself. At some intermediate step, it transmogrifies the transaction into something else, then verifies it. It’s this transmogrified form of the transaction that we need to grab for the contents of the “Sartre” file.

Satoshi: That’s not how any of this works

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/05/satoshi-thats-not-how-any-of-this-works.html

In this WIRED article, Gavin Andresen says why he believes Craig Wright's claim to be Satoshi Nakamoto:

“It’s certainly possible I was bamboozled,” Andresen says. “I could spin stories of how they hacked the hotel Wi-fi so that the insecure connection gave us a bad version of the software. But that just seems incredibly unlikely. It seems the simpler explanation is that this person is Satoshi.”

That’s not how this works. That’s not how any of this works.

The entire point of Bitcoin is that it’s decentralized. We don’t need to take Andresen’s word for it. We don’t need to take anybody’s word for it. Nobody needs to fly to London and check it out on a private computer. Instead, you can just send somebody the signature, and they can verify it themselves. That the story was embargoed means nothing — either way, Andresen was constrained by an NDA. Since they didn’t do it the correct way, and were doing it the roundabout way, the simpler explanation is that he was being bamboozled.

Below is an example of this, using the Electrum Bitcoin wallet software:

This proves that the owner of the Bitcoin Address has signed the Message, producing the Signature. I typed the first two fields and hit the "Sign" button. The wallet looked up the address in my wallet (which can have many addresses), found the matching private key that only I possess, then signed the message by filling in the bottom window.

If you had reason to believe that this address belonged to Satoshi Nakamoto, such as if it had been used in the first blocks, then I would have just proven to you that I am indeed Satoshi. You wouldn't need to take anybody's word for it. You'd simply type in the fields (or copy/paste them), hit "Verify", and verify it for yourself.

So you can verify me, here are the strings you can copy/paste:

Robert Graham is Satoshi Nakamoto

15fszyyM95UANiEeVa4H5L6va7Z7UFZCYP 

GyMgaHVszLSej/VuCdeXnMmiB/d6rBrghQ3qR6XvabZtBrzF8vOA1IW4MnhNfcLny1N15pSZw16JlmQWss7y3zM=

You should get either a “Signature verified” or “Wrong signature” message when you click the “Verify” button.

There may be a little strangeness since my original message is ASCII: if you copy it out of this webpage, it'll go in as Unicode. But that encoding information appears to be ignored in the verification process, so it'll still work.
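
If you'd rather not click around in a GUI, Electrum also ships a command-line interface; assuming the electrum command is installed and its verifymessage sub-command behaves as documented, the same check looks roughly like this:

electrum verifymessage 15fszyyM95UANiEeVa4H5L6va7Z7UFZCYP "GyMgaHVszLSej/VuCdeXnMmiB/d6rBrghQ3qR6XvabZtBrzF8vOA1IW4MnhNfcLny1N15pSZw16JlmQWss7y3zM=" "Robert Graham is Satoshi Nakamoto"
# prints true if the signature checks out, false otherwise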

Summary

Occam's Razor says that Andresen was tricked. There was no reason to fly him to London otherwise. They could've just sent him an email with a message, a signature, and an address, and Andresen could've verified it himself.

GE Oil & Gas – Digital Transformation in the Cloud

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ge-oil-gas-digital-transformation-in-the-cloud/

GE Oil & Gas is a relatively young division of General Electric, the product of a series of acquisitions made by its parent company starting in the late 1980s. Today GE Oil & Gas is pioneering the digital transformation of the company. In the guest post below, Ben Cabanas, the CTO of GE Transportation and formerly the cloud architect for GE Oil & Gas, talks about some of the key steps involved in a major enterprise cloud migration, the theme of his recent presentation at the 2016 AWS Summit in Sydney, Australia.

You may also want to learn more about Enterprise Cloud Computing with AWS.


Jeff;


Challenges and Transformation
GE Oil & Gas is at the forefront of GE’s digital transformation, a key strategy for the company going forward. The division is also operating at a time when the industry is facing enormous competitive and cost challenges, so embracing technological innovation is essential. As GE CIO Jim Fowler has noted, today’s industrial companies have to become digital innovators to thrive.

Moving to the cloud is a central part of this transformation for GE. Of course, that’s easier said than done for a large enterprise division of our size, global reach, and role in the industry. GE Oil & Gas has more than 45,000 employees working across 11 different regions and seven research centers. About 85 percent of the world’s offshore oil rigs use our drilling systems, and we spend $5 billion annually on energy-related research and development—work that benefits the entire industry. To support all of that work, GE Oil & Gas has about 900 applications, part of a far larger portfolio of about 9,000 apps used across GE. A lot of those apps may have 100 users or fewer, but are still vital to the business, so it’s a huge undertaking to move them to the cloud.

Our cloud journey started in late 2013 with a couple of goals. We wanted to improve productivity in our shop floors and manufacturing operations. We sought to build applications and solutions that could reduce downtime and improve operations. Most importantly, we wanted to cut costs while improving the speed and agility of our IT processes and infrastructure.

Iterative Steps
Working with AWS Professional Services and Sogeti, we launched the cloud initiative in 2013 with a highly iterative approach. In the beginning, we didn’t know what we didn’t know, and had to learn agile as well as how to move apps to the cloud. We took steps that, in retrospect, were crucial in supporting later success and accelerated cloud adoption. For example, we sent more than 50 employees to Seattle for training and immersion in AWS technologies so we could keep critical technical IP in-house. We built foundational services on AWS, such as monitoring, backup, DNS, and SSO automation that, after a year or so, fostered the operational maturity to speed the cloud journey. In the process, we discovered that by using AWS, we can build things at a much faster pace than what we could ever accomplish doing it internally.

Moving to AWS has delivered both cost and operational benefits to GE Oil & Gas.

We architected for resilience, and strove to automate as much as possible to reduce touch times. Because automation was an overriding consideration, we created a “bot army” that is aligned with loosely coupled microservices to support continuous development without sacrificing corporate governance and security practices. We built in security at every layer with smart designs that could insulate and protect GE in the cloud, and set out to measure as much as we could—TCO, benchmarks, KPIs, and business outcomes. We also tagged everything for greater accountability and to understand the architecture and business value of the applications in the portfolio.
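
As a generic illustration of the kind of tagging involved (my own example using the AWS CLI, not GE's actual tagging scheme or resource IDs):

aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=Application,Value=drilling-analytics Key=CostCenter,Value=oil-gas
# tagged resources can then be grouped, costed, and audited by application and owner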

Moving Forward
All of these efforts are now starting to pay off. To date, we’ve realized a 52 percent reduction in TCO. That stems from a number of factors, including the bot-enabled automation, a push for self-service, dynamic storage allocation, using lower-cost VMs when possible, shutting off compute instances when they’re not needed, and moving from Oracle to Amazon Aurora. Ultimately, these savings are a byproduct of doing the right thing.

The other big return we’ve seen so far is an increase in productivity. With more resilient, cloud-enabled applications and a focus on self-service capability, we’re getting close to a “NoOps” environment, one where we can move away from “DevOps” and “ArchOps,” and all the other “ops,” using automation and orchestration to scale effectively without needing an army of people. We’ve also seen a 50 percent reduction in “tickets” and a 98 percent reduction in impactful business outages and incidents—an unexpected benefit that is as valuable as the cost savings.

For large organizations, the cloud journey is an extended process. But we're seeing clear benefits and, from the emerging metrics, can draw a few conclusions. NoOps is our future, and automation is essential for speed and agility, although robust monitoring and automation require investments of skill, time, and money. People with the right skill sets and passion are a must, and it's important to have plenty of good talent in-house. It's essential to partner with business leaders and application owners in the organization to minimize friction and resistance to what is a major business transition. And we've found AWS to be a valuable service provider. AWS has helped move a business that was grounded in legacy IT to an organization that is far more agile and cost efficient in a transformation that is adding value to our business and to our people.

— Ben Cabanas, Chief Technology Officer, GE Transportation

 
