Tag Archives: omd

AWS Partner Webinar Series – September & October 2017

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/aws-partner-webinar-series-september-october-2017/

The wait is over. September and October’s Partner Webinars have officially arrived! In case you missed the intro last month, the AWS Partner Webinar Series is a selection of live and recorded presentations covering a broad range of topics at varying technical levels and scale. A little different from our AWS Online TechTalks, each AWS Partner Webinar is hosted by an AWS solutions architect and an AWS Competency Partner who has successfully helped customers evaluate and implement the tools, techniques, and technologies of AWS.

September & October Partner Webinars:

SAP Migration
Velocity: How EIS Reduced Costs by 20% and Optimized SAP by Leveraging the Cloud
September 19, 2017 | 10:00 AM PDT

Mactores: SAP on AWS: How UCT is Experiencing Better Performance on AWS While Saving 60% in Infrastructure Costs with Mactores
September 19, 2017 | 1:00 PM PDT

Accenture: Reduce Operating Costs and Accelerate Efficiency by Migrating Your SAP Applications to AWS with Accenture
September 20, 2017 | 10:00 AM PDT

Capgemini: Accelerate your SAP HANA Migration with Capgemini & AWS FAST
September 21, 2017 | 10:00 AM PDT

Salesforce
Salesforce IoT: Monetize your IoT Investment with Salesforce and AWS
September 27, 2017 | 10:00 AM PDT

Salesforce Heroku: Build Engaging Applications with Salesforce Heroku and AWS
October 10, 2017 | 10:00 AM PDT

Windows Migration
Cascadeo: How a National Transportation Software Provider Migrated a Mission-Critical Test Infrastructure to AWS with Cascadeo
September 26, 2017 | 10:00 AM PDT

Datapipe: Optimize App Performance and Security by Managing Microsoft Workloads on AWS with Datapipe
September 27, 2017 | 10:00 AM PDT

Datavail: Datavail Accelerates AWS Adoption for Sony DADC New Media Solutions
September 28, 2017 | 10:00 AM PDT

Life Sciences

SAP, Deloitte & Turbot: Life Sciences Compliance on AWS
October 4, 2017 | 10:00 AM PDT

Healthcare

AWS, ClearData & Cloudticity: Healthcare Compliance on AWS 
October 5, 2017 | 10:00 AM PDT

Storage

N2WS: Learn How Goodwill Industries Ensures 24/7 Data Availability on AWS
October 10, 2017 | 8:00 AM PDT

Big Data

Zoomdata: Taking Complexity Out of Data Science with AWS and Zoomdata
October 10, 2017 | 10:00 AM PDT

Attunity: Cardinal Health: Moving Data to AWS in Real-Time with Attunity 
October 11, 2017 | 11:00 AM PDT

Splunk: How TrueCar Gains Actionable Insights with Splunk Cloud
October 18, 2017 | 9:00 AM PDT

Encrypt and Decrypt Amazon Kinesis Records Using AWS KMS

Post Syndicated from Temitayo Olajide original https://aws.amazon.com/blogs/big-data/encrypt-and-decrypt-amazon-kinesis-records-using-aws-kms/

Customers with strict compliance or data security requirements often require data to be encrypted at all times, including at rest or in transit within the AWS cloud. This post shows you how to build a real-time streaming application using Kinesis in which your records are encrypted while at rest or in transit.

Amazon Kinesis overview

The Amazon Kinesis platform enables you to build custom applications that analyze or process streaming data for specialized needs. Amazon Kinesis can continuously capture and store terabytes of data per hour from hundreds of thousands of sources such as website clickstreams, financial transactions, social media feeds, IT logs, and transaction tracking events.

Amazon Kinesis Streams uses HTTPS to encrypt data in flight between clients and the service, which protects against eavesdropping on records as they are transferred. However, records encrypted by HTTPS are decrypted once the data enters the service. This data is stored at rest for 24 hours (configurable up to 168 hours) to ensure that your applications have enough headroom to process, replay, or catch up if they fall behind.
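
If your application needs more replay headroom than the default 24 hours, you can raise the retention period through the Streams API. Below is a minimal sketch using the AWS SDK for Java; the stream name is a placeholder, and the client is assumed to pick up credentials and region from your environment.

import com.amazonaws.services.kinesis.AmazonKinesisClient;
import com.amazonaws.services.kinesis.model.IncreaseStreamRetentionPeriodRequest;

AmazonKinesisClient kinesis = new AmazonKinesisClient();
// Raise retention from the default 24 hours to the 168-hour maximum.
kinesis.increaseStreamRetentionPeriod(new IncreaseStreamRetentionPeriodRequest()
        .withStreamName("EncryptedTickerStream")   // placeholder stream name
        .withRetentionPeriodHours(168));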

Walkthrough

In this post, you build encryption and decryption into sample Kinesis producer and consumer applications using the Amazon Kinesis Producer Library (KPL), the Amazon Kinesis Client Library (KCL), AWS KMS, and the aws-encryption-sdk. The methods and techniques used in this post to encrypt and decrypt Kinesis records can easily be adapted to your own architecture. Some constraints:

  • AWS charges for the KMS API requests used for encryption and decryption; for more information, see AWS KMS Pricing.
  • You cannot use Amazon Kinesis Analytics to query Amazon Kinesis Streams with records encrypted by clients in this sample application.
  • If your application requires low-latency processing, note that client-side encryption and decryption add a small amount of latency.

The following diagram shows the architecture of the solution.

Encrypting the records at the producer

Before you call the PutRecord or PutRecords API, you will encrypt the string record by calling KinesisEncryptionUtils.toEncryptedString.

In this example, we used a sample stock sales ticker object:

{"tickerSymbol": "AMZN", "salesPrice": "900", "orderId": "300", "timestamp": "2017-01-30 02:41:38"}

The KinesisEncryptionUtils.toEncryptedString method takes four parameters:

  • com.amazonaws.encryptionsdk.AwsCrypto
  • the stock sales ticker object
  • com.amazonaws.encryptionsdk.kms.KmsMasterKeyProvider
  • java.util.Map of the encryption context

The ciphertext is returned to the caller, which then checks its size by calling KinesisEncryptionUtils.calculateSizeOfObject. Encryption increases the size of an object, so to prevent the record from being throttled, the size of the payload (one or more records) is validated to ensure it is not greater than 1 MB. In this example, encrypted records whose payload exceeds 1 MB are logged as a warning. If the size is under the limit, the record is sent by calling addUserRecord if you are using the KPL, or PutRecord/PutRecords if you are using the Kinesis Streams API.
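
To make the flow concrete, here is a minimal sketch of what helpers like KinesisEncryptionUtils.toEncryptedString, calculateSizeOfObject, and toEncryptedByteStream might look like, assuming the ticker object is serialized with toString() and sizes are measured as UTF-8 bytes; the actual implementation in the kinesisencryption repository may differ.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Map;

import com.amazonaws.encryptionsdk.AwsCrypto;
import com.amazonaws.encryptionsdk.kms.KmsMasterKeyProvider;

public final class KinesisEncryptionUtilsSketch {

    // Serialize the ticker object and encrypt it under the KMS master key,
    // binding the encryption context to the ciphertext.
    public static String toEncryptedString(AwsCrypto crypto, Object ticker,
            KmsMasterKeyProvider provider, Map<String, String> context) {
        return crypto.encryptString(provider, ticker.toString(), context).getResult();
    }

    // Size of the Base64 ciphertext in bytes, used for the 1 MB payload check.
    public static int calculateSizeOfObject(String payload) {
        return payload.getBytes(StandardCharsets.UTF_8).length;
    }

    // UTF-8 encode the ciphertext for addUserRecord or PutRecord/PutRecords.
    public static ByteBuffer toEncryptedByteStream(String encryptedString) {
        return ByteBuffer.wrap(encryptedString.getBytes(StandardCharsets.UTF_8));
    }
}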

Example: Encrypting records with KPL

//Encrypting the records
String encryptedString = KinesisEncryptionUtils.toEncryptedString(crypto, ticker, prov, context);
log.info("Size of encrypted object is : " + KinesisEncryptionUtils.calculateSizeOfObject(encryptedString));
//check if size of record is greater than 1MB
if (KinesisEncryptionUtils.calculateSizeOfObject(encryptedString) > 1024000)
   log.warn("Record added is greater than 1MB and may be throttled");
//UTF-8 encoding of encrypted record
ByteBuffer data = KinesisEncryptionUtils.toEncryptedByteStream(encryptedString);
//Adding the encrypted record to stream
ListenableFuture<UserRecordResult> f = producer.addUserRecord(streamName, randomPartitionKey(), data);
Futures.addCallback(f, callback);

In the above code, the example sales ticker record is passed to KinesisEncryptionUtils.toEncryptedString and an encrypted record is returned. The encryptedString value is also passed to KinesisEncryptionUtils.calculateSizeOfObject, which returns the size of the encrypted payload; this size is checked to ensure it is less than 1 MB. If it is, the payload is UTF-8 encoded (KinesisEncryptionUtils.toEncryptedByteStream) and then sent to the stream for processing.

Example: Encrypting the records with Streams PutRecord

//Encrypting the records
String encryptedString = KinesisEncryptionUtils.toEncryptedString(crypto, ticker, prov, context);
log.info("Size of encrypted object is : " + KinesisEncryptionUtils.calculateSizeOfObject(encryptedString));
//check if size of record is greater than 1MB
if (KinesisEncryptionUtils.calculateSizeOfObject(encryptedString) > 1024000)
    log.warn("Record added is greater than 1MB and may be throttled");
//UTF-8 encoding of encrypted record
ByteBuffer data = KinesisEncryptionUtils.toEncryptedByteStream(encryptedString);
putRecordRequest.setData(data);
putRecordRequest.setPartitionKey(randomPartitionKey());
//putting the record into the stream
kinesis.putRecord(putRecordRequest);

Verifying that records are encrypted

After the call to KinesisEncryptionUtils.toEncryptedString, you can print out the encrypted string record just before UTF-8 encoding. An example of what is printed to standard output when running this sample application is shown below.

[main] INFO kinesisencryption.streams.EncryptedProducerWithStreams - String Record is TickerSalesObject{tickerSymbol='FB', salesPrice='184.285409142', orderId='2a0358f1-9f8a-4bbe-86b3-c2929047e15d', timeStamp='2017-01-30 02:41:38'} and Encrypted Record String is AYADeMf6zmVg9JvIkGNv5M39rhUAbgACAAdLaW5lc2lzAARjYXJzABVhd3MtY3J5cHRvLXB1YmxpYy1rZXkAREFpUkpCaG1UOFQ3UTZQZ253dm9FSU9iUDZPdE1xTHdBZ1JjNlZxN2doMDZ3QlBEWUZndWdJSEFKaXNvT0ZPUGsrdz09AAEAB2F3cy1rbXMAS2Fybjphd3M6a21zOnVzLWVhc3QtMTo1NzM5MDY1ODEwMDI6a2V5LzM3ZGM5MGRjLTNmMWMtNGE3Ny1hNTFkLWE2NTNiMTczZmNkYgCnAQEBAHgbPoaYTiF/oIMp49yPBkZmVVotylZpUqwkkzJJicLjLQAAAH4wfAYJKoZIhvcNAQcGoG8wbQIBADBoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDCCYBk+hfB3tOGVx7QIBEIA7FqaEcOWpic+gKNeT+dUe4yttB9dsZSFPAUTlz2L2zlyLXSLMh1otRH24SO485ov+TCTtRCgiA8a9rYQCAAAAAAwAABAArlGWPO8BavNSJIpJOtJekRUhOwbM+WM1NBVXB/////8AAAABXNZnRND3J7u8EZx3AAAAkfSxVPMYUv0Ovrd4AIUTmMcaiR0Z+IcJNAXqAhvMmDKpsJaQG76Q6pYExarolwT+6i87UOi6TGvAiPnH74GbkEniWe66rAF6mOra2JkffK6pBdhh95mEOGLaVPBqs2jswUTfdcBJQl9NEb7wx9XpFX8fNDF56Vly7u6f8OQ7lY6fNrOupe5QBFnLvwehhtogd72NTQ/yEbDDoPKUZN3IlWIEAGYwZAIwISFw+zdghALtarsHSIgPMs7By7/Yuda2r3hqSmqlCyCXy7HMFIQxHcEILjiLp76NAjB1D8r8TC1Zdzsfiypi5X8FvnK/6EpUyFoOOp3y4nEuLo8M2V/dsW5nh4u2/m1oMbw=

You can also verify that the record stayed encrypted in Streams by printing out the UTF-8 decoded received record immediately after the getRecords API call. An example of the print output when running the sample application is shown below.

[Thread-2] INFO kinesisencryption.utils.KinesisEncryptionUtils - Verifying object received from stream is encrypted. -Encrypted UTF-8 decoded : AYADeBJz/kt7Fm3L1lvS8Wy8jhAAbgACAAdLaW5lc2lzAARjYXJzABVhd3MtY3J5cHRvLXB1YmxpYy1rZXkAREFrM2N4K2s1ODJuOGVlNWF3TVJ1dk1UUHZQc2FHeGoxQisxb09kNWtDUExHYjJDS0lMZW5LSnlYakRmdFR4dzQyUT09AAEAB2F3cy1rbXMAS2Fybjphd3M6a21zOnVzLWVhc3QtMTo1NzM5MDY1ODEwMDI6a2V5LzM3ZGM5MGRjLTNmMWMtNGE3Ny1hNTFkLWE2NTNiMTczZmNkYgCnAQEBAHgbPoaYTiF/oIMp49yPBkZmVVotylZpUqwkkzJJicLjLQAAAH4wfAYJKoZIhvcNAQcGoG8wbQIBADBoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDAGI3oWLlIJ2p6kffQIBEIA7JVUOTsLtEyNK8vS4GIS9iyTejuB2xhIpRXfG8o0lUfHawcrCbNbNH8XLm/8RW5JbgXo10EpOs8dSjkICAAAAAAwAABAAy64r24sGVKWN4C1gXCwJYHvZkLpJJj16SZlhpv////8AAAABg2pPFchIiaM7D9VuAAAAkwh10ul5sZQ08KsgkFszOOvFoQu95CiY7cK8H+tBloVOZglMqhhhvoIIZLr9hmI8/lQvRXzGDdo7Xkp0FAT5Jpztt8Hq/ZuLfZtNYIWOw594jShqpZt6uXMdMnpb/38R3e5zLK5vrYkM6NS4WPMFrHsOKN5tn0CDForgojRcdpmCJ8+cWLNltb2S+EJiWiyWS+ibw2vJ/RFm6WZO6nD+MXn3vyMAZzBlAjAuIUTYL1cbQ3ENxDIeXHJAWQguNPqxq4HgaCmCEI9/rn/GAKSc2nT9ln3UsVq/2dgCMQC7yNJ3DCTnppavfxTbcVS+rXaDDpZZx/ZsluMqXAFM5/FFvKRqr0dVML28tGunxmU=

Decrypting the records at the consumer

After you receive the records into your consumer as a list, you can get the data as a ByteBuffer by calling record.getData. You then decode and decrypt the ByteBuffer by calling KinesisEncryptionUtils.decryptByteStream. This method takes five parameters:

  • com.amazonaws.encryptionsdk.AwsCrypto
  • the record ByteBuffer
  • com.amazonaws.encryptionsdk.kms.KmsMasterKeyProvider
  • the key ARN string
  • java.util.Map of your encryption context

A string representation of the ticker sales object is returned to the caller for further processing. In this example, this representation is simply printed to standard output.

[Thread-2] INFO kinesisencryption.streams.DecryptShardConsumerThread - Decrypted Text Result is TickerSalesObject{tickerSymbol='AMZN', salesPrice='304.958313333', orderId='50defaf0-1c37-4e84-85d7-bc15597355eb', timeStamp='2017-01-30 02:41:38'}

Example: Decrypting records with the KCL and Streams API

ByteBuffer buffer = record.getData();
//Decrypting the encrypted record data
String decryptedResult = KinesisEncryptionUtils.decryptByteStream(crypto, buffer, prov, this.getKeyArn(), this.getContext());
log.info("Decrypted Text Result is " + decryptedResult);

With the above code, records in the Kinesis stream are decrypted using the same key ARN and encryption context that were previously used to encrypt them on the producer side.
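
Here is a matching sketch of what KinesisEncryptionUtils.decryptByteStream might do with its five parameters, again assuming the AWS Encryption SDK for Java; checking the returned master key ID and encryption context before trusting the plaintext is a sensible precaution, though the repository version may differ.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Map;

import com.amazonaws.encryptionsdk.AwsCrypto;
import com.amazonaws.encryptionsdk.CryptoResult;
import com.amazonaws.encryptionsdk.kms.KmsMasterKey;
import com.amazonaws.encryptionsdk.kms.KmsMasterKeyProvider;

public final class KinesisDecryptionSketch {

    public static String decryptByteStream(AwsCrypto crypto, ByteBuffer buffer,
            KmsMasterKeyProvider provider, String keyArn, Map<String, String> context) {
        // UTF-8 decode the Kinesis record back into the ciphertext string.
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        String ciphertext = new String(bytes, StandardCharsets.UTF_8);

        // Decrypt, then verify that the expected master key and encryption
        // context were used before returning the plaintext.
        CryptoResult<String, KmsMasterKey> result = crypto.decryptString(provider, ciphertext);
        if (!result.getMasterKeyIds().contains(keyArn)
                || !result.getEncryptionContext().entrySet().containsAll(context.entrySet())) {
            throw new IllegalStateException("Unexpected master key or encryption context");
        }
        return result.getResult();
    }
}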

Maven dependencies

To use the implementation I’ve outlined in this post, you need the Maven dependencies outlined below in your pom.xml, together with the Bouncy Castle libraries. Bouncy Castle provides a cryptography API for Java.

<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-ext-jdk15on</artifactId>
    <version>1.54</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-encryption-sdk-java</artifactId>
    <version>0.0.1</version>
</dependency>
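
Depending on your JVM configuration, you may also need to make Bouncy Castle available as a JCE provider at runtime; whether this step is required varies by environment, but a typical bootstrap looks like the following sketch.

import java.security.Security;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public final class CryptoBootstrap {
    public static void init() {
        // Register Bouncy Castle if it is not already listed in java.security.
        if (Security.getProvider(BouncyCastleProvider.PROVIDER_NAME) == null) {
            Security.addProvider(new BouncyCastleProvider());
        }
    }
}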

Summary

You can incorporate the above sample code snippets into your application code, or use them as a guide, to start encrypting and decrypting your records to and from an Amazon Kinesis stream.

A complete producer and consumer example application, and a more detailed step-by-step example of developing an Amazon Kinesis producer and consumer application on AWS with encrypted records, is available at the kinesisencryption GitHub repository.

If you have questions or suggestions, please comment below.


About the Author

Temitayo Olajide is a Cloud Support Engineer with Amazon Web Services. He works with customers to provide architectural solutions, support, and guidance for implementing high-velocity streaming data applications in the cloud. In his spare time, he plays ping-pong and hangs out with family and friends.

Related

Secure Amazon EMR with Encryption

Assert() in the hands of bad coders

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/assert-in-hands-of-bad-coders.html

Using assert() creates better code, as programmers double-check assumptions. But only if used correctly. Unfortunately, bad programmers tend to use them badly, making code worse than if no asserts were used at all. They are a nuanced concept that most programmers don’t really understand.

We saw this recently with the crash of “Bitcoin Unlimited”, a version of Bitcoin that allows more transactions. They used an assert() to check the validity of input, and when they received bad input, most of the nodes in the network crashed.

The Bitcoin Classic/Unlimited code is full of bad uses of assert. The following examples are all from the file main.cpp.



Example #1: this line of code:

    if (nPos >= coins->vout.size() || coins->vout[nPos].IsNull())
        assert(false);

This use of assert is silly. The code should look like this:

    assert(nPos < coins->vout.size());
    assert(!coins->vout[nPos].IsNull());

This is the least of their problems. It’s understandable that as code ages, and things are added or changed, odd-looking code like this appears. But still, it’s an example of wrong thinking about asserts. Among the problems this would cause: if asserts were ever turned off, you’d have to deal with dead-code-elimination warnings in static analyzers.

Example #2: this line of code:

    assert(view.Flush());

The code within an assert is supposed to only read values, not change them. In this example, the Flush function changes things. Normally, asserts are compiled only into debug versions of the code and removed from release versions. However, doing so for Bitcoin would cause the program to behave incorrectly, as things like the Flush() function would no longer be called. That’s why they put the following at the top of this code, to inform people that assertions must be left enabled.

    #if defined(NDEBUG)
    # error "Bitcoin cannot be compiled without assertions."
    #endif

Example #3: this line of code:

    CBlockIndex* pindexNew = new CBlockIndex(block);
    assert(pindexNew);

The new operator never returns NULL, but throws its own exception instead. Not only is this a misconception about what new does, it’s also a misconception about assert. The assert is supposed to check for bad code, not check errors.

Example #4: this line of code:

    BlockMap::iterator mi = mapBlockIndex.find(inv.hash);
    CBlock block;
    const Consensus::Params& consensusParams = Params().GetConsensus();
    if (!ReadBlockFromDisk(block, (*mi).second, consensusParams))
        assert(!"cannot load block from disk");

This is the feature that crashed Bitcoin Unlimited, and would also crash main Bitcoin nodes that use the “XTHIN” feature. The problem comes from parsing input (inv.hash). If the parsed input is bad, then the block won’t exist on the disk, and the assert will fail, and the program will crash.

Again, assert is for checking for bad code that leads to impossible conditions, not checking errors in input, or checking errors in system functions.


Conclusion

The above examples were taken from only one file in the Bitcoin Classic source code. They demonstrate the typical ways bad programmers misuse asserts. They’d make a great example for showing programming students how asserts should not be used.

More generally, though, it shows why there’s a difference between 1x and 10x programmers. 1x programmers, like those writing Bitcoin code, make the typical mistake of treating assert() as error checking. The nuance of assert is lost on them.


Updated to reflect that I’m referring to the “Bitcoin Classic” source code, which isn’t the “Bitcoin Core” source code. However, all the problems above appear to also be problems in the Bitcoin Core source code.

In which I have to debunk a second time

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/11/in-which-i-have-to-debunk-second-time.html

So Slate is doubling-down on their discredited story of a secret Trump server. Tip for journalists: if you are going to argue against an expert debunking your story, try to contact that expert first, so they don’t have to do what I’m going to do here, showing obvious flaws. Also, pay attention to the data.

The experts didn’t find anything

The story claims:

“I spoke with many DNS experts. They found the evidence strongly suggestive of a relationship between the Trump Organization and the bank”.

No, he didn’t. He gave experts limited information and asked them whether it was consistent with a conspiracy theory. He didn’t ask whether it was “suggestive” of the conspiracy theory, or whether that was the best theory that fit the data.

This is why “experts” quoted in the press need to go through “media training”, to avoid having their reputations harmed by bad journalists who try their best to put words in their mouths. You get trained to recognize bad journalists like this, and how not to get sucked into their fabrications.

Jean Camp isn’t an expert

On the other hand, Jean Camp isn’t an expert. I’ve never heard of her before. She gets details wrong. Take, for example, this blog post, where she discusses lookups for the domain mail.trump-email.com.moscow.alfaintra.net. She says:

This query is unusual in that is merges two hostnames into one. It makes the most sense as a human error in inserting a new hostname in some dialog window, but neglected to hit the backspace to delete the old hostname.

Uh, no. It’s normal DNS behavior with non-FQDNs. If the lookup for a name fails, computers will try again, pasting the local domain on the end. In other words, when Twitter’s DNS was taken offline by the DDoS attack a couple weeks ago, those monitoring DNS saw a zillion lookups for names like "www.twitter.com.example.com".

I’ve reproduced this on my desktop by configuring the suffix moscow.alfaintra.net.

I then pinged “mail1.trump-email.com” and captured the packets. As you can see, after the initial lookups fail, Windows tried appending the suffix.

I don’t know what Jean Camp is an expert of, but this is sorta a basic DNS concept. It’s surprising she’d get it wrong. Of course, she may be an expert in DNS who simply had a brain fart (this happens to all of us), but looking across her posts and tweets, she doesn’t seem to be somebody who has a lot of experience with DNS. Sorry for impugning her credibility, but that’s the way the story is written. It demands that we trust the quoted “experts”. 
Call up your own IT department at Slate. Ask your IT nerds if this is how DNS operates. Note: I’m saying your average, unremarkable IT nerds can debunk an “expert” you quote in your story.

Understanding “spam” and “blacklists”

The new article has a paragraph noting that the IP address doesn’t appear on spam blocklists:

Was the server sending spam—unsolicited mail—as opposed to legitimate commercial marketing? There are databases that assiduously and comprehensively catalog spam. I entered the internet protocol address for mail1.trump-email.com to check if it ever showed up in Spamhaus and DNSBL.info. There were no traces of the IP address ever delivering spam.

This is a profound misunderstanding of how these things work.

Colloquially, we call those sending mass marketing emails, like Cendyn, “spammers”. But those running blocklists have a narrower definition. If  emails contain an option to “opt-out” of future emails, then it’s technically not “spam”.

Cendyn is constantly getting added to blocklists when people complain. They spend considerable effort contacting the many organizations maintaining blocklists, proving they do “opt-outs”, and getting “white-listed” instead of “black-listed”. Indeed, the entire spam-blacklisting industry is a bit of a scam — getting white-listed often involves a bit of cash.

Those maintaining blacklists only go back a few months. The article is in error saying there’s no record ever of Cendyn sending spam. Instead, if an address comes up clean, it means there’s no record for the past few months. And, if Cendyn is in the white-lists, there would be no record of “spam” at all, anyway.

As somebody who frequently scans the entire Internet, I’m constantly getting on/off blacklists. It’s a real pain. At the moment, my scanner address “209.126.230.71” doesn’t appear to be on any blacklists. Next time a scan kicks off, it’ll probably get added — but only by a few, because most have white-listed it.

There is no IP address limitation

The story repeats the theory, which I already debunked, that the server has a weird configuration that limits who can talk to it:

The scientists theorized that the Trump and Alfa Bank servers had a secretive relationship after testing the behavior of mail1.trump-email.com using sites like Pingability. When they attempted to ping the site, they received the message “521 lvpmta14.lstrk.net does not accept mail from you.”

No, that’s how Listrak (the company that actually controls the server) configures all of its marketing servers. Anybody can confirm this themselves by pinging the other servers in this range.
In case you don’t want to do the scans yourself, you can look it up on Shodan and see that there are at least 4,000 servers around the Internet that give the same error message.

Again, go back to Chris Davis from your original story and ask him about this. He’ll confirm that there’s nothing nefarious or weird going on here, that it’s just how Listrak has decided to configure all its spam-sending engines.

Either this conspiracy goes much deeper, with hundreds of servers involved, or this is a meaningless datapoint.

Where did the DNS logs come from?

Tea Leaves and Jean Camp are showing logs of private communications. Where did these logs come from? This information isn’t public. It means somebody has done something like hack into Alfa Bank. Or it means researchers who monitor DNS (for maintaining DNS, and for doing malware research) have broken their NDAs and possibly the law.

The data is incomplete and inconsistent. Those who work for other companies, like Dyn, claim it doesn’t match their own data. We have good reason to doubt these logs. There’s a good chance that the source doesn’t have as comprehensive a view as “Tea Leaves” claims. There’s also a good chance the data has been manipulated.

Specifically, I have a source who claims records for trump-email.com were changed in June, meaning either my source or Tea Leaves is lying.

Until we know more about the source of the data, it’s impossible to believe the conclusion that only Alfa Bank was doing DNS lookups.

By the way, if you are a company like Alfa Bank, and you don’t want the “research” community from seeing leaked intranet DNS requests, then you should probably reconfigure your DNS resolvers. You’ll want to look into RFC7816 “query minimization”, supported by the Unbound and Knot resolvers.

Do the graphs show interesting things?

The original “Tea Leaves” researchers are clearly acting in bad faith. They are trying to twist the data to match their conclusions. For example, in the original article, they claim that peaks in the DNS activity match campaign events. But looking at the graph, it’s clear these are unrelated. It displays the common cognitive bias of seeing patterns that aren’t there.

Likewise, they claim that the timing throughout the day matches what you’d expect from humans interacting back and forth between Moscow and New York. No. This is what the activity looks like, graphing the number of queries by hour:

As you can see, there’s no pattern. When workers go home at 5pm in New York City, it’s midnight in Moscow. If humans were involved, you’d expect an eight-hour lull during that time. Likewise, when workers arrive at 9am in New York City, you’d expect a spike in traffic for about an hour until workers in Moscow go home. You see none of that here. What you instead see is a random distribution throughout the day — the sort of distribution you’d expect if these were DNS lookups from incoming spam.

The point is that we know the original “Tea Leaves” researchers aren’t trustworthy, and that they’ve convinced themselves of things that just aren’t there.

Does Trump control the server in question?

OMG, this post asks the question after I’ve already debunked the original story, and still gets the answer wrong.

The answer is that Listrak controls the server. Not even Cendyn controls it, really; they just contract services from Listrak. In other words, not only does Trump not control it, the next-level company (Cendyn) doesn’t control it either.

Does Trump control the domain in question?

OMG, this new story continues to claim that the Trump Organization controls the domain trump-email.com, despite my debunking showing that Cendyn controls the domain.

Look at the WHOIS info yourself. All the contact info goes to Cendyn. It fits the pattern Cendyn chooses for their campaigns:
  • trump-email.com
  • mjh-email.com
  • denihan-email.com
  • hyatt-email.com
Cendyn even spells “Trump Orgainzation” wrong.

There’s a difference between a “server” and a “name”

The article continues to make trivial technical errors, like confusing what a server is with what a domain name is. For example:

One of the intriguing facts in my original piece was that the Trump server was shut down on Sept. 23, two days after the New York Times made inquiries to Alfa Bank

The server has never been shut down. Instead, the name “mail1.trump-email.com” was removed from Cendyn’s DNS servers.

It’s impossible to debunk everything in these stories, because they garble the technical details so much that it’s impossible to know what the heck they are claiming.

Why did Cendyn change things after Alfa Bank was notified?

It’s a curious coincidence that Cendyn changed their DNS records a couple days after the NYTimes contacted Alfa Bank.

But “coincidence” is all it is. I have years of experience investigating data breaches. I know that such coincidences abound. There’s always a weird coincidence that you are certain is meaningful, but which by the end of the investigation just isn’t.

The biggest source of coincidences is that IT is always changing things and always messing things up. It’s the nature of IT. Thus, you’ll always see a change in IT that matches some other event. Those looking for conspiracies ignore the changes that don’t match and focus on the one that does, so it looms suspiciously.

As I’ve mentioned before, I have a source who says Cendyn changed things around in June. This makes me believe that “Tea Leaves” is selectively presenting changes to highlight the one in September.

In any event, many people have noticed that the registrar email “Emily McMullin” has the same last name as Evan McMullin, who is running against Trump in Utah. This supports my point: when you do hacking investigations, you find irrelevant connections all over the freakin’ place.

“Experts stand by their analysis”

This new article states:

I’ve checked back with eight of the nine computer scientists and engineers I consulted for my original story, and they all stood by their fundamental analysis

Well, of course, they don’t want to look like idiots. But notice the subtle rephrasing of the question: the experts stand by their analysis. That doesn’t mean the same thing as standing behind the reporter’s analysis. The experts made narrow judgements, which even I stand behind as mostly correct, given the data they were given at the time. None of them were asked whether the entire conspiracy theory holds up.
What you should ask is people like Chris Davis or Paul Vixie whether they stand behind my analysis in the past two posts. Or really, ask any expert. I’ve documented things in sufficient clarity. For example, go back to Chris Davis and ask him again about the “limited IP address” theory, and whether it holds up against my scan of that data center above.
Conclusion

Other major news outlets all passed on the story, because even non-experts could tell it was flawed. The data means nothing. The Slate journalist nonetheless went forward with the story, tricking experts, and finding some non-experts.

But as I’ve shown, given a complete technical analysis, the story falls apart. Most of what’s strange is perfectly normal. The data itself (the DNS logs) is untrustworthy. It treats unknown things (like how the mail server rejects IP addresses) as “unknowable” things that confirm the conspiracy, when they are in fact simply things unknown at the current time, which can become knowable with a little research.

What I show in my first post, and this post, is more data. This data shows context. This data explains the unknowns that Slate present. Moreover, you don’t have to trust me — anybody can replicate my work and see for themselves.


New – Product Support Connection for AWS Marketplace Customers

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-product-support-connection-for-aws-marketplace-customers/

There are now over 2,700 software products listed in AWS Marketplace. Tens of thousands of AWS customers routinely find, buy, and start using offerings from more than 925 Independent Software Vendors (ISVs).

Today we are giving you, as a consumer of software through AWS Marketplace, the ability to selectively share your contact information (name, title, phone number, email address, location, and organization) with software vendors in order to simplify and streamline your requests for product support. The vendors can programmatically access this information and store it within their own support systems so that they can easily verify your identity and provide you with better support.

This is an opt-in program. Sellers can choose to participate, and you can choose to share your contact information.

Product Support Connection as a Buyer
In order to test out this feature I launched Barracuda Web Application Firewall (WAF). After selecting my options and clicking on the Accept Software Terms & Launch with 1-click button, I was given the option to share my contact details:

Then I entered my name and other information:

I have the option to enter up to 5 contacts for each subscription at this point. I can also add, change, or delete them later if necessary:

If products that I already own are now enabled for Product Support Connection, I can add contact details here as well.

Product Support Connection as a Seller
If I am an ISV and want to participate in this program, I contact the AWS Marketplace Seller & Catalog Operations Team ([email protected]). The team will enroll me in the program and provide me with access to a secure API that I can use to access contact information. Per the terms of the program, I must register each contact in my support or CRM system within one business day, and use it only for support purposes. To learn more, read AWS Marketplace Product Support Connection Helps Software Vendors Provide More Seamless Product Support on the AWS Partner Network Blog.

Getting Started
When I am searching for products in AWS Marketplace, I can select Product Support Connection as a desired product attribute:

As part of today’s launch, I would like to thank the following vendors who worked with us to shape this program and to add the Product Support Connection to their offerings:

  • Barracuda – Web Application Firewall (WAF), NextGen Firewall F-Series, Load Balancer ADC, Email Security Gateway, Message Archiver.
  • Chef – Chef Server, Chef Compliance.
  • Matillion – Matillion ETL for Redshift.
  • Rogue Wave – OpenLogic Enhanced Support for CentOS (6 & 7, Standard & Security Hardened).
  • SoftNAS – SoftNAS Cloud (Standard & Express).
  • Sophos – Sophos UTM Manager 4, Sophos UTM 9.
  • zData – Greenplum database.
  • Zend – PHP, Zend Server.
  • Zoomdata – Zoomdata.

We’re looking forward to working with other vendors who would like to follow their lead!

To get started, contact the AWS Marketplace Seller & Catalog Operations Team at [email protected].


Jeff;

Astro Pi Coding Challenges: a message from Tim Peake

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/astro-pi-coding-challenges-update/

Back in February, we announced an extension to the Astro Pi mission in the form of two coding challenges. The first required you to write Python Sense HAT code to turn Ed and Izzy (the Astro Pi computers) into an MP3 player, so that Tim Peake could plug in his headphones and listen to his music. The second required you to code Sonic Pi music for Tim to listen to via the MP3 player.

We announced the winners in early April. Since then, we’ve been checking your code on flight-equivalent Astro Pi units and going through the official software delivery and deployment process with the European Space Agency (ESA).

Crew time is heavily regulated on the ISS. However, because no science or experimentation output is required for this, they allowed us to upload it as a crew care package for Tim! We’re very grateful to the UK Space Agency and ESA for letting us extend the Astro Pi project in this way to engage more kids.

The code was uploaded and Tim deployed it onto Ed on May 15. He then recorded this and sent it to us:

Tim Peake with the Astro Pi MP3 player

British ESA astronaut Tim Peake’s message to the students who took part in the 2016 Astro Pi coding challenges to hack his Astro Pi mini-computer, on the International Space Station, into an MP3 player. The music heard is called Run to the Stars composed by one of the teams who took part.

In total, there were four winning MP3 players and four winning Sonic Pi tunes; the audio from the Sonic Pi entries was converted into MP3 format, so that it could be played by the MP3 players. The music heard is called Run to the Stars, composed with Sonic Pi by Iris and Joseph Mitchell, who won the 11 years and under age group.

Tim tested all four MP3 players, listened to all four Sonic Pi tunes, and then went on to load more tunes from his own Spacerocks collection onto the Astro Pi!

Tim said in an email:

As a side note, I’ve also loaded it with some of my Spacerocks music – it works just great. I was dubious about the tilt mechanism working well in microgravity, using the accelerometers to change tracks, but it works brilliantly. I tried inputting motion in other axes to test the stability and it was rock solid – it only worked with the correct motion. Well done to that group!!

“That group” was Lowena Hull from Portsmouth High School, whose MP3 player could change tracks by quickly twisting the Astro Pi to the left or right. Good coding, Lowena!

Thanks again to everyone who took part, to our special judges OMD and Ilan Eshkeri, and especially to Tim Peake, who did this during his time off on a Sunday afternoon last weekend.

The post Astro Pi Coding Challenges: a message from Tim Peake appeared first on Raspberry Pi.

Announcing Sending Authorization!

Post Syndicated from Brian Huang original https://aws.amazon.com/blogs/ses/announcing-sending-authorization/

The Amazon SES team is excited to announce the release of sending authorization! This feature allows users to grant permission to use their email addresses or domains to other accounts or IAM users.

Note that for simplicity, we’ll be referring to email addresses and domains collectively as “identities” and the accounts and IAM users receiving permissions as “delegate senders.”

Why should I use sending authorization?

The primary incentive to use sending authorization is to enable cross-account identity usage with fine-grained permission control. Let’s look at two example use cases.

Say you’ve just been hired to create and manage an email marketing campaign for an online retailer. Until now, in order to send the retailer’s marketing emails under their domain name, you would have had to convince them to allow you to verify their domain under your own AWS account—this would let you send emails using any address under their domain, at any time, and for any purpose, which the retailer might not be comfortable with. You’d also have to work out who would get the bounce/complaint/delivery notifications, which might be additionally confusing because the notifications from your marketing emails would be sent to the same place as the notifications from the transactional emails the retailer is handling.

With sending authorization, however, you can use the retailer’s identity and receive delivery, bounce and complaint notifications while letting them retain sole ownership of it. Identity owners will still be able to monitor usage with delivery, bounce, and complaint notifications and can adjust permissions at any time, and use AWS condition keys to finely control the scope of those permissions.

Imagine instead that you own or administrate for a company that has several disparate teams that all wish to use SES to send emails using a common email address. Until now, you would have had to create and maintain IAM users for each of these teams under the same account (in which case they still would have access to each other’s identities) or verify the same identity under multiple different accounts.

With sending authorization, you can verify the common identity under the single account (perhaps yours) and simply grant the other teams permission to use it. If you still prefer the IAM policy route, you can take advantage of the new condition keys released with sending authorization to tighten up the IAM policies.

Sending authorization is designed to be powerful and flexible. In fact, Amazon WorkMail uses sending authorization to provide an enterprise-level email and calendaring service built on SES.

How does sending authorization work?

Identity owners grant permissions by creating authorization policies. Let’s look at an example. The policy below gives account 9999-9999-9999 permission to use the ses-example.com domain owned by 8888-8888-8888 in SendEmail and SendRawEmail requests as long as the “From” address is [email protected] (with any address tags).

{
  "Id": "SampleAuthorizationPolicy",	
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AuthorizeMarketer",
      "Effect": "Allow",
      "Resource": "arn:aws:ses:us-east-1:888888888888:identity/ses-example.com",
      "Principal": {"AWS": ["999999999999"]},
      "Action": ["SES:SendEmail", "SES:SendRawEmail"],
      "Condition": {
        "StringLike": {
          "ses:FromAddress": "marketing+.*@ses-example.com"
        }
      }
    }
  ]
}

You could write this policy yourself, or you could use the Policy Generator in the SES console, which is even easier. Your Policy Generator page would look like:

Policy generator

Identity owners can add or create a policy for an identity using the PutIdentityPolicy API or the SES console, and can have up to 20 different policies for each identity. You can read more about how to construct and use policies in our developer guide.
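
If you prefer the API route, attaching the policy above with the AWS SDK for Java might look roughly like the sketch below; it assumes the policy JSON is held in a policyJson string, and reuses the identity ARN and policy name from the example above.

import com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClient;
import com.amazonaws.services.simpleemail.model.PutIdentityPolicyRequest;

AmazonSimpleEmailServiceClient ses = new AmazonSimpleEmailServiceClient();
ses.putIdentityPolicy(new PutIdentityPolicyRequest()
        .withIdentity("arn:aws:ses:us-east-1:888888888888:identity/ses-example.com")
        .withPolicyName("SampleAuthorizationPolicy")
        .withPolicy(policyJson)); // policyJson holds the JSON document shown above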

How do I make a call with someone else’s identity that I have permission to use?

You’ll specify to SES that you’re using someone else’s identity by presenting an ARN when you make a request. The ARN below refers to an example domain identity (ses-example.com) owned by account 9999-9999-9999 in the US West (Oregon) AWS region.

	arn:aws:ses:us-west-2:999999999999:identity/ses-example.com

Depending on how you make your call, you may need to provide up to three different ARNs: a “source” identity ARN, a “from” identity ARN, and a “return-path” identity ARN. The SendEmail and SendRawEmail APIs have new optional parameters for this purpose, but users of our SMTP endpoint or our SendRawEmail API have the option to instead provide the ARNs as X-headers (X-SES-Source-ARN, X-SES-From-ARN, and X-SES-Return-Path-ARN). See our SendEmail and SendRawEmail documentation for more information about these identities. These headers will be removed by SES before your email is sent.
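
As a rough sketch, a delegate sender calling SendEmail through the AWS SDK for Java might pass the ARNs like this; the addresses are placeholders, and the SourceArn/ReturnPathArn parameter names should be checked against the SDK version you use.

import com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClient;
import com.amazonaws.services.simpleemail.model.*;

AmazonSimpleEmailServiceClient ses = new AmazonSimpleEmailServiceClient();
ses.sendEmail(new SendEmailRequest()
        .withSource("marketing+campaign@ses-example.com")  // placeholder "From" address
        .withSourceArn("arn:aws:ses:us-west-2:999999999999:identity/ses-example.com")
        .withReturnPathArn("arn:aws:ses:us-west-2:999999999999:identity/ses-example.com")
        .withDestination(new Destination().withToAddresses("recipient@example.com"))  // placeholder
        .withMessage(new Message()
                .withSubject(new Content("Hello from a delegate sender"))
                .withBody(new Body(new Content("Sent using SES sending authorization")))));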

What happens to notifications when email is sent by a delegate?

Both the identity owner and the delegate sender can set their own bounce, complaint, and delivery notification preferences. SES respects both sets of preferences independently. As a delegate sender, you can configure your notification settings almost as you would if you were the identity owner. The two key differences are that you use ARNs in place of identities, and cannot configure feedback forwarding (a.k.a. receiving bounces and complaints via email) in the console or the API. But, this doesn’t mean that delegate senders cannot use feedback forwarding. If you are a delegate sender and you do want bounces and complaints to be forwarded to an email address you own, just set the “return-path” address of your emails to an identity that you own. You can read more about it in our developer guide.

Billing, sending limits, and reputation

Cross-account emails count against the delegate’s sending limits, so the delegate is responsible for applying for any sending limit increases they might need. Similarly, delegated emails get charged to the delegate’s account, and any bounces and complaints count against the delegate’s reputation.

Sending authorization and IAM policies

It’s important to distinguish between SES sending authorization policies and IAM policies. Although the policies look similar at first glance, sending authorization policies dictate who is allowed to use an SES identity, and IAM policies (set using AWS Identity and Access Management) control what IAM users are allowed to do. The two are independent. Therefore, it’s entirely possible for an IAM user to be unable to use an identity despite having authorization from the owner, because the user’s IAM policies do not give permission to use SES (and vice versa). Keep in mind, however, that by default, IAM users with SES access are allowed to use any identities owned by their parent account unless a sending authorization policy explicitly dictates otherwise.

On a related note, with the release of sending authorization, we’re externalizing several new condition keys that you can use in your sending authorization and/or IAM policies:

  • ses:Recipients
  • ses:FromAddress
  • ses:FromDisplayName
  • ses:FeedbackAddress

These can be used to control when policies apply. For example, you might use the “ses:FromAddress” condition key to write an IAM policy that only permits an IAM user to call SES using a certain “From” address. For more information about how to use our new condition keys, see our developer guide.

We hope you find this feature useful! If you have any questions or comments, let us know in the SES Forum or here in the comment section of the blog.