AWS discount vouchers? We can do better than that!

Post Syndicated from Brian Huang original http://www.anchor.com.au/blog/2016/07/aws-discount-voucher/

If you’re looking to reduce your AWS spend, read on.

It’s no secret that the range of cloud services offered by providers such as Amazon Web Services brings many benefits to your business. Despite this, complex usage pricing and a huge range of services (around 60 at this stage!) can make it challenging to control your costs.

And while AWS discount vouchers sadly aren’t really a thing, there are other, simple ways to reduce your AWS costs. Many organisations move to the cloud expecting to save money, but depending on how you’ve set things up, you may find you’re spending more than you expected. While there are certainly a great many steps you can take to cut AWS costs, Anchor has a very simple solution that can save you anywhere from 5% to 40% on your AWS bills quickly and painlessly – even if you wish to continue using On-Demand instances.

As you’re almost certainly aware, Amazon Web Services has a somewhat complex, usage-based pricing system. A quick scroll down Amazon’s EC2 pricing page or a minute or two with the “AWS Simple Monthly Calculator” tells its own story.

AWS cost models

To help you get your head around AWS pricing they’ve got a guide called “How AWS Pricing Works” which will help you to understand how your costs are calculated, and give you an understanding of the pricing principles behind the various services – EC2, S3, Elastic Beanstalk, OpsWorks – and so on.

If you’re looking to significantly reduce your costs or get discounts on your AWS spend, then under normal circumstances you need to immerse yourself in the world of Reserved Instances (RIs): carefully considering your EC2 instance families, committing to a 1- or 3-year term, and deciding whether to pay everything upfront, make a partial upfront payment, or continue to pay as you go on a monthly basis.

Comparing breakeven points and exploring the differences between All-Upfront and Partial-Upfront Reservations while taking into account existing Reservations you’ve purchased in the context of your current deployment is a huge challenge – especially from a spreadsheet.

It can take hours (days!) to calculate how many reservations you need – and if you buy the wrong quantity or types of Reserved Instances, you could end up spending more money than you save.
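
The underlying breakeven arithmetic is simple, even if juggling it across dozens of instance types, terms and payment options isn’t. Here’s a minimal sketch using made-up placeholder prices rather than real AWS rates:

```python
# Toy breakeven comparison between On-Demand and a 1-year Reserved Instance.
# All prices here are illustrative placeholders, NOT real AWS rates.

HOURS_PER_MONTH = 730  # AWS's standard monthly-hours approximation

def monthly_cost_on_demand(hourly_rate):
    return hourly_rate * HOURS_PER_MONTH

def cumulative_cost_reserved(months, upfront, hourly_rate):
    return upfront + hourly_rate * HOURS_PER_MONTH * months

def breakeven_month(od_rate, upfront, ri_rate, term_months=12):
    """First month at which the RI's cumulative cost drops below On-Demand."""
    for month in range(1, term_months + 1):
        if cumulative_cost_reserved(month, upfront, ri_rate) < monthly_cost_on_demand(od_rate) * month:
            return month
    return None  # the RI never pays off within its term

if __name__ == "__main__":
    # Hypothetical: $0.10/hr On-Demand vs. $350 upfront + $0.04/hr reserved
    print(breakeven_month(0.10, 350.0, 0.04))
```

Buy the wrong shape of reservation (the second case below never breaks even) and you’re locked in for the term – which is exactly the spreadsheet risk described above, multiplied across your whole fleet.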

While there is quite clearly some work to do and risks to consider, the savings you make can be significant:

AWS RIs

Alternatively, if you’d like a simple way to reduce your AWS costs, get notified when your spend reaches certain thresholds, and gain better visibility into your AWS environment usage to unearth further opportunities to save money, you should consider Anchor’s new AWS Caretaker service.

Not only will you get an almost instant reduction in your AWS costs thanks to our volume discounts and cost-optimisation tech, you’ll also get vastly improved monitoring and reporting, plus the benefit of Anchor’s AWS-certified support team on call 24×7.

And if you’re interested in buying RIs, we can add a lot of value here too. Our optimisation engine can recommend RI purchases for each instance type, OS and availability zone, the upfront cost of those purchases, and the estimated savings that will result from those purchases if you maintain your usage patterns.

Get in touch with our team if you’d like to learn more. We’ll quickly review your AWS account and provide you with an estimate of the savings we can deliver (and added value we can provide!) with just a few simple steps.

Get in touch and reduce your next AWS invoice!

The post AWS discount vouchers? We can do better than that! appeared first on AWS Managed Services by Anchor.

Bluetooth LED bulbs

Post Syndicated from Matthew Garrett original https://mjg59.dreamwidth.org/43722.html

The best known smart bulb setups (such as the Philips Hue and the Belkin Wemo) are based on Zigbee, a low-energy, low-bandwidth protocol that operates on various unlicensed radio bands. The problem with Zigbee is that basically no home routers or mobile devices have a Zigbee radio, so to communicate with them you need an additional device (usually called a hub or bridge) that can speak Zigbee and also hook up to your existing home network. Requests are sent to the hub (either directly if you’re on the same network, or via some external control server if you’re on a different network) and it sends appropriate Zigbee commands to the bulbs.

But requiring an additional device adds some expense. People have attempted to solve this in a couple of ways. The first is building direct network connectivity into the bulbs, in the form of adding an 802.11 controller. Go through some sort of setup process[1], the bulb joins your network and you can communicate with it happily. Unfortunately adding wifi costs more than adding Zigbee, both in terms of money and power – wifi bulbs consume noticeably more power when “off” than Zigbee ones.

There’s a middle ground. There’s a large number of bulbs available from Amazon advertising themselves as Bluetooth, which is true but slightly misleading. They’re actually implementing Bluetooth Low Energy, which is part of the Bluetooth 4.0 spec. Implementing this requires both OS and hardware support, so older systems are unable to communicate. Android 4.3 devices tend to have all the necessary features, and modern desktop Linux is also fine as long as you have a Bluetooth 4.0 controller.

Bluetooth is intended as a low power communications protocol. Bluetooth Low Energy (or BLE) is even lower than that, running in a similar power range to Zigbee. Most semi-modern phones can speak it, so it seems like a pretty good choice. Obviously you lose the ability to access the device remotely, but given the track record on this sort of thing that’s arguably a benefit. There’s a couple of other downsides – the range is worse than Zigbee (but probably still acceptable for any reasonably sized house or apartment), and only one device can be connected to a given BLE server at any one time. That means that if you have the control app open while you’re near a bulb, nobody else can control that bulb until you disconnect.

The quality of the bulbs varies a great deal. Some of them are pure RGB bulbs and incapable of producing a convincing white at a reasonable intensity[2]. Some have additional white LEDs but don’t support running them at the same time as the colour LEDs, so you have the choice between colour or a fixed (and usually more intense) white. Some allow running the white LEDs at the same time as the RGB ones, which means you can vary the colour temperature of the “white” output.

But while the quality of the bulbs varies, the quality of the apps doesn’t really. They’re typically all dreadful, competing on features like changing bulb colour in time to music rather than on providing a pleasant user experience. And the whole “Only one person can control the lights at a time” thing doesn’t really work so well if you actually live with anyone else. I was dissatisfied.

I’d met Mike Ryan at Kiwicon a couple of years back after watching him demonstrate hacking a BLE skateboard. He offered a couple of good hints for reverse engineering these devices, the first being that Android already does almost everything you need. Hidden in the developer settings is an option marked “Enable Bluetooth HCI snoop log”. Turn that on and all Bluetooth traffic (including BLE) is dumped into /sdcard/btsnoop_hci.log. Start the app, make some changes, retrieve the file and check it out using Wireshark. Easy.
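
That snoop log is in the btsnoop capture format, which Wireshark opens directly. If you’d rather poke at it programmatically, the container itself is simple enough to parse by hand. A minimal sketch of the layout (big-endian 16-byte file header, then 24-byte record headers), demonstrated here on a synthetic in-memory capture rather than a real log:

```python
import io
import struct

# Minimal reader for the btsnoop capture format written by Android's
# "Enable Bluetooth HCI snoop log" option. Normally you'd just open the
# file in Wireshark; this only illustrates the on-disk layout.

BTSNOOP_MAGIC = b"btsnoop\x00"

def read_btsnoop(stream):
    """Yield (flags, timestamp, packet_bytes) for each record in the capture."""
    if stream.read(8) != BTSNOOP_MAGIC:
        raise ValueError("not a btsnoop file")
    version, datalink = struct.unpack(">II", stream.read(8))
    if version != 1:
        raise ValueError("unsupported btsnoop version: %d" % version)
    while True:
        header = stream.read(24)
        if len(header) < 24:
            break  # end of capture
        orig_len, incl_len, flags, drops, ts = struct.unpack(">IIIIq", header)
        yield flags, ts, stream.read(incl_len)

if __name__ == "__main__":
    # A tiny synthetic capture containing a single 3-byte packet
    capture = (BTSNOOP_MAGIC
               + struct.pack(">II", 1, 1002)          # version 1, datalink 1002 (HCI UART)
               + struct.pack(">IIIIq", 3, 3, 0, 0, 0)  # one record header
               + b"\x01\x02\x03")
    for flags, ts, pkt in read_btsnoop(io.BytesIO(capture)):
        print(flags, ts, pkt.hex())
```

Decoding the HCI and ATT layers inside each packet is the part Wireshark does for you, so in practice this is only useful if you want to script bulk extraction of the write commands.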

Conveniently, BLE is very straightforward when it comes to network protocol. The only thing you have is GATT, the Generic Attribute Profile. Using this you can read and write multiple characteristics. Each packet is limited to a maximum of 20 bytes. Most implementations use a single characteristic for light control, so it’s then just a matter of staring at the dumped packets until something jumps out at you. A pretty typical implementation is something like:

0x56,r,g,b,0x00,0xf0,0x00,0xaa

where r, g and b are each just a single byte representing the corresponding red, green or blue intensity. 0x56 presumably indicates a “Set the light to these values” command, 0xaa indicates end of command and 0xf0 indicates that it’s a request to set the colour LEDs. Sending 0x0f instead results in the previous byte (0x00 in this example) being interpreted as the intensity of the white LEDs. Unfortunately the bulb I tested that speaks this protocol didn’t allow you to drive the white LEDs at the same time as anything else – setting the selection byte to 0xff didn’t result in both sets of intensities being interpreted at once. Boo.
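
Those 8-byte commands are easy to build as byte strings. A sketch of the framing just described – the 0x56/0xaa header and trailer and the 0xf0/0x0f selection bytes are from the dump above, while the function names and white-packet layout are my reading of it:

```python
# Command packets for the bulb protocol described above.
# Framing: 0x56 header, 0xaa trailer; byte 5 selects colour (0xf0)
# or white (0x0f) mode. Behaviour may vary between bulb vendors.

def set_rgb(r, g, b):
    """Set the colour LEDs to the given 0-255 intensities."""
    return bytes([0x56, r, g, b, 0x00, 0xf0, 0x00, 0xaa])

def set_white(intensity):
    """Set the white LED; with selection byte 0x0f, the byte just
    before it carries the white intensity and the RGB slots are ignored."""
    return bytes([0x56, 0x00, 0x00, 0x00, intensity, 0x0f, 0x00, 0xaa])
```

Writing either packet to the light-control characteristic (e.g. via gatttool’s char-write-cmd, as below) sets the bulb immediately; as noted, on the bulb tested here a hypothetical combined selection byte like 0xff did not light both sets of LEDs at once.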

You can test this out fairly easily using the gatttool app. Run hcitool lescan to look for the device (remember that it won’t show up if anything else is connected to it at the time), then do gatttool -b deviceid -I to get an interactive shell. Type connect to initiate a connection, and once connected send commands by doing char-write-cmd handle value using the handle obtained from your hci dump.

I did this successfully for various bulbs, but annoyingly hit a problem with one from Tikteck. The leading byte of each packet was clearly a counter, but the rest of the packet appeared to be garbage. For reasons best known to themselves, they’ve implemented application-level encryption on top of BLE. This was a shame, because they were easily the best of the bulbs I’d used – the white LEDs work in conjunction with the colour ones once you’re sufficiently close to white, giving you good intensity and letting you modify the colour temperature. That gave me incentive, but figuring out the protocol took quite some time. Earlier this week, I finally cracked it. I’ve put a Python implementation on Github. The idea is to tie it into Ulfire running on a central machine with a Bluetooth controller, making it possible for me to control the lights from multiple different apps simultaneously and also integrating with my Echo.

I’d write something about the encryption, but I honestly don’t know. Large parts of this make no sense to me whatsoever. I haven’t even had any gin in the past two weeks. If anybody can explain how anything that’s being done there makes any sense at all[3] that would be appreciated.

[1] typically via the bulb pretending to be an access point, but also these days through a terrifying hack involving spewing UDP multicast packets of varying lengths in order to broadcast the password to associated but unauthenticated devices and good god the future is terrifying

[2] For a given power input, blue LEDs produce more light than other colours. To get white with RGB LEDs you either need to have more red and green LEDs than blue ones (which costs more), or you need to reduce the intensity of the blue ones (which means your headline intensity is lower). Neither is appealing, so most of these bulbs will just give you a blue “white” if you ask for full red, green and blue

[3] Especially the bit where we calculate something from the username and password and then encrypt that using some random numbers as the key, then send 50% of the random numbers and 50% of the encrypted output to the device, because I can’t even

DevOps Cafe Episode 68 – Patrick Debois

Post Syndicated from DevOpsCafeAdmin original http://devopscafe.org/show/2016/7/7/devops-cafe-episode-68-patrick-debois.html

If a serverless server reboots does anyone hear it?

John and Damon welcome back the always insightful Patrick Debois (Small Town Heroes) to get his practitioner’s point of view on going “serverless”, delivering mobile apps, and the continued evolution of DevOps.

Direct download

Follow John Willis on Twitter: @botchagalupe
Follow Damon Edwards on Twitter: @damonedwards 
Follow Patrick Debois on Twitter: @patrickdebois

Notes:

Please leave comments or questions below and we’ll read them on the show!

10 million Android phones infected by all-powerful auto-rooting apps (Ars Technica)

Post Syndicated from jake original http://lwn.net/Articles/693798/rss

Ars Technica reports on the “HummingBad” malware that has infected millions of Android devices: “Researchers from security firm Check Point Software said the malware installs more than 50,000 fraudulent apps each day, displays 20 million malicious advertisements, and generates more than $300,000 per month in revenue. The success is largely the result of the malware’s ability to silently root a large percentage of the phones it infects by exploiting vulnerabilities that remain unfixed in older versions of Android.” The article is based on a report [PDF] from Check Point, though the article notes that “researchers from mobile security company Lookout say HummingBad is in fact Shedun, a family of auto-rooting malware that came to light last November and had already infected a large number of devices“.

Hijacking Someone’s Facebook Account with a Fake Passport Copy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/hijacking_someo.html

BBC has the story. The confusion is that a scan of a passport is much easier to forge than an actual passport. This is a truly hard problem: how do you give people the ability to get back into their accounts after they’ve lost their credentials, while at the same time prohibiting hackers from using the same mechanism to hijack accounts? Demanding an easy-to-forge copy of a hard-to-forge document isn’t a good solution.

How Una Got Her Stolen Laptop Back

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/how-una-found-her-stolen-laptop/

Lost Laptop World Map

Reading Peter’s post on getting your data ready for vacation travels reminded me of a story we recently received from a Backblaze customer. Una’s laptop was stolen and then traveled across multiple continents over the next year. Here’s Una’s story, in her own words, on how she got her laptop back. Enjoy.

Pulse Incident Number 10028192
(or: How Playing Computer Games Can Help You In Adulthood)

One day when I was eleven, my father arrived home with an object that looked like a briefcase made out of beige plastic. Upon lifting it, one realized it had the weight of, oh, around two elephants. It was an Ericsson ‘portable’ computer, one of the earliest laptop prototypes. All my classmates had really cool and fashionable computer game consoles with amazing names like “Atari” and “Commodore”, beautifully vibrant colour displays, and joysticks. Our Ericsson had a display with two colours (orange and … dark orange), it used floppy discs that were actually floppy (remember those?), ran on DOS and had no hard drive (you had to load the operating system every single time you turned on the computer. Took around 10 minutes). I dearly loved this machine, however, and played each of the 6 games on it incessantly. One of these was “Where In The World Is Carmen Sandiego?”, an educational game where a detective has to chase an archvillain around the world, using geographical and cultural references as clues to get to the next destination. Fast forward twenty years and…

It’s June 2013, I’m thirty years old, and I still love laptops. I live in Galway, Ireland; I’m a self-employed musician who works in a non-profit music school so the cash is tight, but I’ve splashed out on a Macbook Pro and I LOVE IT. I’m on a flight from Dublin to Dubai with a transfer in Turkey. I talk to the guy next to me, who has an Australian accent and mentions he’s going to Asia to research natural energy. A total hippy, I’m interested; we chat until the convo dwindles, I do some work on my laptop, and then I fall asleep.

At 11pm the plane lands in Turkey and we’re called off to transfer to a different flight. Groggy, I pick up my stuff and stumble down the stairs onto the tarmac. In the half-light beside the plane, in the queue for the bus to the terminal, I suddenly realize that I don’t have my laptop in my bag. Panicking, I immediately seek out the nearest staff member. “Please! I’ve left my laptop on the plane – I have to go back and get it!”

The guy says: “No. It’s not allowed. You must get on the bus, madam. The cabin crew will find it and put it in “Lost and Found” and send it to you.” I protest but I can tell he’s immovable. So I get on the bus, go into the terminal, get on another plane and fly to Dubai. The second I land I ring Turkish Air to confirm they’ve found my laptop. They haven’t. I pretty much stalk Turkish Air for the next two weeks to see if the laptop turns up, but to no avail. I travel back via the same airport (Ataturk International), and go around all three Lost and Found offices in the airport, but my laptop isn’t there amongst the hundreds of Kindles and iPads. I don’t understand.

As time drags on, the laptop doesn’t turn up. I report the theft in my local Garda station. The young Garda on duty is really lovely to me and gives me lots of empathy, but the fact that the laptop was stolen in airspace, in a foreign, non-EU country, does not bode well. I continue to stalk Turkish Airlines; they continue to stonewall me, so I get in touch with the Turkish Department for Consumer Affairs. I find a champion amongst them called Ece, who contacts Turkish Airlines and pleads on my behalf. Unfortunately they seem to have more stone walls in Turkey than there are in the entirety of Co. Galway, and his pleas fall on deaf ears. Ece advises me I’ll have to bring Turkish Airlines to court to get any compensation, which I suspect will cost more time and money than the laptop is realistically worth. In a first-world way, I’m devastated – this object was a massive financial outlay for me, a really valuable tool for my work. I try to appreciate the good things – Ece and the Garda Sharon have done their absolute best to help me, my pal Jerry has loaned me a laptop to tide me over the interim – and then I suck it up, say goodbye to the last of my savings, and buy a new computer.

I start installing the applications and files I need for my business. I subscribe to an online backup service, Backblaze, whereby every time I’m online my files are uploaded to the cloud. I’m logging in to Backblaze to recover all my files when I see a button I’ve never noticed before labelled “Locate My Computer”. I catch a breath. Not even daring to hope, I click on it… and it tells me that Backblaze keeps a record of my computer’s location every time it’s online, and can give me the IP address my laptop has been using to get online. The records show my laptop has been online since the theft!! Not only that, but Backblaze has continued to back up files, so I can see all files the thief has created on my computer. My laptop has last been online in, of all the places, Thailand. And when I look at the new files saved on my computer, I find Word documents about solar power. It all clicks. It was the plane passenger beside me who had stolen my laptop, and he is so clueless he’s continued to use it under my login, not realizing this makes him trackable every time he connects to the internet.

I keep the ‘Locate My Computer” function turned on, so I’m consistently monitoring the thief’s whereabouts, and start the chapter of my life titled “The Sleep Deprivation and The Phonebill”. I try ringing the police service in Thailand (GMT +7 hours) multiple times. To say this is ineffective is an understatement; the language barrier is insurmountable. I contact the Irish embassy in Bangkok – oh, wait, that doesn’t exist. I try a consulate, who is lovely but has very limited powers, and while waiting for them to get back to me I email two Malaysian buddies asking them if they know anyone who can help me navigate the language barrier. I’m just put in touch with this lovely pal-of-a-pal called Tupps who’s going to help me when… I check Backblaze and find out that my laptop had started going online in East Timor. Bye bye, Thailand.

I’m so wrecked trying to communicate with the Thai bureaucracy I decide to play the waiting game for a while. I suspect East Timor will be even more of an international diplomacy challenge, so let’s see if the thief is going to stay there for a while before I attempt a move, right? I check Backblaze around once a week for a month, but then the thief stops all activity – I’m worried. I think he’s realized I can track him and has stopped using my login, or has just thrown the laptop away. Reason kicks in, and I begin to talk myself into stopping my crazy international stalking project. But then, when I least expect it, I strike informational GOLD. In December, the thief checks in for a flight from Bali to Perth and saves his online check-in to the computer desktop. I get his name, address, phone number, and email address, plus flight number and flight time and date.

I have numerous fantasies about my next move. How about I ring up the police in Australia, they immediately believe my story and do my every bidding, and then the thief is met at Arrivals by the police, put into handcuffs and marched immediately to jail? Or maybe I should somehow use the media to tell the truth about this guy’s behaviour and give him a good dose of public humiliation? Should I try my own version of restorative justice, contact the thief directly and appeal to his better nature? Or, the most tempting of all, should I get my Australian-dwelling cousin to call on him and bash his face in? … This last option, to be honest, is the outcome I want the most, but Emmett’s actually on the other side of the Australian continent, so it’s a big ask, not to mention the ever-so-slightly scary consequences for both Emmett and myself if we’re convicted… ! (And, my conscience cries weakly from the depths, it’s just the teensiest bit immoral.) Christmas is nuts, and I’m just so torn and ignorant about which course of action to take that I … do nothing.

One morning in the grey light of early February I finally decide what to do. Although it’s the longest shot in the history of long shots, I will ring the Australian police force about a laptop belonging to a girl from the other side of the world, which was stolen in airspace, in yet another country in the world. I use Google to figure out the nearest Australian police station to the thief’s address. I set my alarm for 4am Irish time, I ring Rockhampton Station, Queensland, and explain the situation to a lovely lady called Danielle. Danielle is very kind and understanding but, unsurprisingly, doesn’t hold out much hope that they can do anything. I’m not Australian, the crime didn’t happen in Australia, there’s questions of jurisdiction, etc. etc. I follow up, out of sheer irrational compulsion rather than with the real hope of an answer, with an email 6 weeks later. There’s no response. I finally admit to myself the laptop is gone. Ever since he’s gone to Australia the thief has copped on and stopped using my login, anyway. I unsubscribe my stolen laptop from Backblaze and try to console myself with the thought that at least I did my best.

And then, completely out of the blue, on May 28th 2014, I get an email from a Senior Constable called Kain Brown. Kain tells me that he has executed a search warrant at a residence in Rockhampton and has my laptop!! He has found it!!! I am stunned. He quickly gets to brass tacks and explains my two options: I can press charges, but it’s extremely unlikely to result in a conviction, and even if it did, the thief would probably only be charged with a $200 fine – and in this situation, it could take years to get my laptop back. If I don’t press charges, the laptop will be kept for 3 months as unclaimed property, and then returned to me. It’s a no-brainer; I decide not to press charges. I wait, and wait, and three months later, on the 22nd September 2014, I get an email from Kain telling me that he can finally release the laptop to me.

Naively, I think my tale is at the “Happy Ever After” stage. I dance a jig around the kitchen table, and read my subsequent email from a “Property Officer” of Rockhampton Station, John Broszat. He has researched how to send the laptop back to me … and my jig is suddenly halted. My particular model of laptop has a lithium battery built into the casing which can only be removed by an expert, and it’s illegal to transport a lithium battery by air freight. So the only option for getting the laptop back, whole and functioning, is via “Sea Mail” – which takes three to four months to get to Ireland. This blows my mind. I can’t quite believe that in this day and age, we can send people to space, a media file across the world in an instant, but that transporting a physical object from one side of the globe to another still takes … a third of a year! It’s been almost a year and a half since my laptop was stolen. I shudder to think of what will happen on its final journey via Sea Mail – knowing my luck, the ship will probably be blown off course and it’ll arrive in the Bahamas.

Fortunately, John is empathetic, and willing to think outside the box. Do I know anyone who will be travelling from Australia to Ireland via plane who would take my laptop in their hand luggage? Well, there’s one tiny silver lining to the recession: half of Craughwell village has a child living in Australia. I ask around on Facebook and find out that my neighbour’s daughter is living in Australia and coming home for Christmas. John Broszat is wonderfully cooperative and mails my laptop to Maroubra Police Station for collection by the gorgeous Laura Gibbons. Laura collects it and brings it home in her hand luggage, and finally, FINALLY, on the 23rd of December 2014, 19 months after it was stolen, I get my hands on my precious laptop again.

I gingerly take the laptop out of the fashionable paper carrier bag in which Laura has transported it. I set the laptop on the table, and examine it. The casing is slightly more dented than it was, but except for that it’s in one piece. Hoping against hope, I open up the screen, press the ‘on’ button and… the lights flash and the computer turns on!!! The casing is dented, there’s a couple of insalubrious pictures on the hard drive I won’t mention, but it has been dragged from Turkey to Thailand to East Timor to Indonesia to Australia, and IT STILL WORKS. It even still has the original charger accompanying it. Still in shock that this machine is on, I begin to go through the hard drive. Of course, it’s radically different – the thief has deleted all my files, changed the display picture, downloaded his own files and applications. I’m curious: What sort of person steals other people’s laptops? How do they think, organize their lives, what’s going through their minds? I’ve seen most of the thief’s files before from stalking him via the Backblaze back-up service, and they’re not particularly interesting or informative about the guy on a personal level. But then I see a file I haven’t seen before, “free ebook.pdf”. I click on it, and it opens. I shake my head in disbelief. The one new file that the thief has downloaded onto my computer is the book “How To Win Friends And Influence People”.

A few weeks later, a new friend and I kiss for the first time. He’s a graphic designer from London. Five months later, he moves over to Ireland to be with me. We’re talking about what stuff he needs to bring when he’s moving and he says “I’m really worried; my desktop computer is huge. I mean, I have no idea how I’m going to bring it over.” Smiling, I say “I have a spare laptop that might suit you…”

[Editor: The moral of the story is make sure your data is backed up before you go on vacation.]

The post How Una Got Her Stolen Laptop Back appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Thursday’s security advisories

Post Syndicated from jake original http://lwn.net/Articles/693722/rss

Debian has updated horizon (two vulnerabilities, one from 2015).

openSUSE has updated ImageMagick (13.2: many vulnerabilities, lots from 2014 and 2015) and qemu (42.1: many vulnerabilities, lots from 2015).

Scientific Linux has updated ocaml (SL7: information leak from 2015).

Ubuntu has updated tomcat8 (16.04: denial of service). In addition, Ubuntu has announced the end of life for 15.10 on July 28 and the end of life for 14.04.x hardware-enablement (HWE) stacks on August 4.

The Difficulty of Routing around Internet Surveillance States

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/the_difficulty_.html

Interesting research: “Characterizing and Avoiding Routing Detours Through Surveillance States,” by Anne Edmundson, Roya Ensafi, Nick Feamster, and Jennifer Rexford.

Abstract: An increasing number of countries are passing laws that facilitate the mass surveillance of Internet traffic. In response, governments and citizens are increasingly paying attention to the countries that their Internet traffic traverses. In some cases, countries are taking extreme steps, such as building new Internet Exchange Points (IXPs), which allow networks to interconnect directly, and encouraging local interconnection to keep local traffic local. We find that although many of these efforts are extensive, they are often futile, due to the inherent lack of hosting and route diversity for many popular sites. By measuring the country-level paths to popular domains, we characterize transnational routing detours. We find that traffic is traversing known surveillance states, even when the traffic originates and ends in a country that does not conduct mass surveillance. Then, we investigate how clients can use overlay network relays and the open DNS resolver infrastructure to prevent their traffic from traversing certain jurisdictions. We find that 84% of paths originating in Brazil traverse the United States, but when relays are used for country avoidance, only 37% of Brazilian paths traverse the United States. Using the open DNS resolver infrastructure allows Kenyan clients to avoid the United States on 17% more paths. Unfortunately, we find that some of the more prominent surveillance states (e.g., the U.S.) are also some of the least avoidable countries.

AWS Marketplace Update – Support for ISVs Based in the EU

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-marketplace-update-support-for-isvs-based-in-the-eu/

AWS Marketplace allows AWS customers to find, buy, and immediately start using cloud-based applications developed by Independent Software Vendors (ISVs).  AWS customers collectively rack up 205 million hours per month of AWS Marketplace usage as they make use of over 2,700 offerings from over 925 ISVs.

Support for EU-Based ISVs
ISVs based in the European Union can now register their products in AWS Marketplace without having to create a US-based entity.

The following EU-based ISVs have already listed their products:

BI/Database

HPC/Storage

Security/Monitoring

Media/Communications

Business Apps

To learn more about their offerings, check out our new Software Solutions from European ISVs page!

Come on In
If you are a US or EU-based ISV and would like to list and sell your products in AWS Marketplace, visit our Sell on AWS Marketplace page.


Jeff;

 

PS – Other recent feature additions to AWS Marketplace include Support for Clusters and AWS Resources and Additional Pricing Options for Sellers. Also, AWS customers can now request multi-year subscriptions to select products in AWS Marketplace at a negotiated discount from the software vendor (discounts on multi-year subscriptions vary by product and vendor). For more information on the eligible products and vendors, please contact us at [email protected].


Watch the AWS Summit – Santa Clara Keynote in Real Time on July 13

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx1UMV1L79BHDWJ/Watch-the-AWS-Summit-Santa-Clara-Keynote-in-Real-Time-on-July-13

Join us online Wednesday, July 13, at 10:00 A.M. Pacific Time for the AWS Summit – Santa Clara Livestream! This keynote presentation, given by Dr. Matt Wood, AWS General Manager of Product Strategy, will highlight the newest AWS features and services, and select customer stories. Don’t miss this live presentation!

Join us in person at the Santa Clara Convention Center
If you are in the Santa Clara area and would like to attend the free Summit, you still have time. Register now to attend.

The Summit includes:

  • More than 50 technical sessions, including these security-related sessions:

    • Automating Security Operations in AWS (Deep Dive)
    • Securing Cloud Workloads with DevOps Automation
    • Deep Dive on AWS IoT
    • Getting Started with AWS Security (Intro)
    • Network Security and Access Control within AWS (Intro)
  • Training opportunities in Hands-on Labs.
  • Full-day training bootcamps. Registration is $600.
  • The opportunity to learn best practices and get questions answered from AWS engineers, expert customers, and partners.
  • Networking opportunities with your cloud and IT peers.

– Craig 

P.S. Can’t make the Santa Clara event? Check out our other AWS Summit locations. If you have summit questions, please contact us at [email protected].

Hot Startups on AWS – June 2016 – Shaadi.com, Capillary, Mondo

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/hot-startups-on-aws-june-2016-shaadi-com-capillary-mondo/

Continuing with our focus on hot AWS-powered startups (March and April), I would like to tell you about three more this month:

  • Shaadi.com – Helping South Asians to find a companion for life.
  • Capillary – Boosting customer engagement for e-commerce.
  • Mondo – A mobile-first bank.

Shaadi.com
Anupam Mittal, founder of Shaadi.com, was exasperated by the way that marriages were arranged in India. Candidate photos and profiles were spread out on a coffee table and perused in hopes of finding a suitable life partner. He believed that this important, tradition-bound process could be improved, and created Shaadi.com, now one of the world’s largest matchmaking services and one of India’s best-known Internet brands.

Shaadi.com blends time-honored traditions (many going back centuries) with a progressive, consumer-oriented mindset. After having touched the lives of over 35 million people and helping over 4 million people to find their matches, they were recognized as one of the 50 most innovative companies in the world back in 2011.

In order to build a scalable business, Shaadi now runs its production infrastructure on AWS with the assistance of a lean DevOps team. They currently make use of Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), Amazon Simple Email Service (SES), and Amazon ElastiCache, with plans to make use of additional managed services in the future. They host their corporate data warehouse on Amazon Redshift and use it to drive all of their reporting and analytics.

Capillary
Back in 2008, the founders of Capillary began to work with retailers to create loyalty programs that were centered around mobile phone numbers. As they did this, they realized that many of the retailers were saddled with traditional, on-premises CRM systems that were not amenable to modernization. The founders stepped in to fill this gap with the goal of creating a cutting-edge customer engagement suite that ran on a multi-tenant cloud-powered platform.

Over the intervening years the solution has grown to encompass CRM, loyalty, e-commerce, customer analytics, and O2O (online-to-offline) commerce. Capillary now connects 150 million shoppers to over 20,000 stores and more than 250 e-commerce implementations across 30 countries, with a focus on driving excellence in online and traditional retail.

Capillary now runs in 5 distinct AWS regions. The architecture is based on microservices and runs on top of EC2, S3, Auto Scaling, Amazon EMR, and ElastiCache, with a focus on security, availability, and scalability (read the Capillary Tech Blog to learn more about how they address these requirements using AWS).

Mondo
Starting with the goal of “building the best bank on the planet,” the team behind UK-based Mondo decided to address the needs of mobile-first users. These users prefer to do their banking via mobile phone instead of in person or on a desktop. In addition to traditional banking functions, the resulting mobile app can track spending in real time, display geolocated transactions on a map, break down spending by category, send money to other users in peer-to-peer fashion, and interact with loyalty programs. Behind the scenes, the app makes use of the Mondo API to interact with the actual banking functions.

The founders chose AWS in order to build a scalable and highly reliable system. They practice account separation (distinct accounts for dev, test, staging, and production) and follow the infrastructure-as-code discipline. Because banking is a regulated business, Mondo uses AWS CloudHSM to sign and cryptographically ensure the integrity of payment messages. They use VPCs and Network ACLs to isolate disparate functions and to manage the scope of regulated activities.


Jeff;

How to Centrally Manage AWS Config Rules across Multiple AWS Accounts

Post Syndicated from Chayan Biswas original https://aws.amazon.com/blogs/devops/how-to-centrally-manage-aws-config-rules-across-multiple-aws-accounts/

AWS Config Rules allow you to codify policies and best practices for your organization and evaluate configuration changes to AWS resources against these policies. If you manage multiple AWS accounts, you might want to centrally govern and define these policies for all of the AWS accounts in your organization. With appropriate authorization, you can create a Config rule in one account that uses an AWS Lambda function owned by another account. Such a setup allows you to maintain a single copy of the Lambda function. You do not have to duplicate source code across accounts.

In this post, I will show you how to create Config rules with appropriate cross-account Lambda function authorization. I’ll use a central account that I refer to as the admin-account to create a Lambda function. All of the other accounts then point to the Lambda function owned by the admin-account to create a Config rule. Let’s call one of these accounts the managed-account. This setup allows you to maintain tight control over the source code and eliminates the need to create a copy of Lambda functions in all of the accounts. You no longer have to deploy updates to the Lambda function in these individual accounts.

We will complete these steps for the setup:

  1. Create a Lambda function for a cross-account Config rule in the admin-account.
  2. Authorize Config Rules in the managed-account to invoke a Lambda function in the admin-account.
  3. Create an IAM role in the managed-account to pass to the Lambda function.
  4. Add a policy and trust relationship to the IAM role in the managed-account.
  5. Pass the IAM role from the managed-account to the Lambda function.

Step 1: Create a Lambda Function for a Cross-Account Config Rule

Let’s first create a Lambda function in the admin-account. In this example, the Lambda function checks if log file validation is enabled for all of the AWS CloudTrail trails. Enabling log file validation helps you determine whether a log file was modified or deleted after CloudTrail delivered it. For more information about CloudTrail log file validation, see Validating CloudTrail Log File Integrity.

Note: This rule is an example only. You do not need to create this specific rule to set up cross-account Config rules. You can apply the concept illustrated here to any new or existing Config rule.

To get started, in the AWS Lambda console, choose the config-rule-change-triggered blueprint.

 

Next, modify the evaluateCompliance function and the handler invoked by Lambda. Leave the rest of the blueprint code as is.

function evaluateCompliance(configurationItem, ruleParameters) {
    checkDefined(configurationItem, 'configurationItem');
    checkDefined(configurationItem.configuration, 'configurationItem.configuration');
    checkDefined(ruleParameters, 'ruleParameters');
    //Check if the resource is of type CloudTrail
    if ('AWS::CloudTrail::Trail' !== configurationItem.resourceType) {
        return 'NOT_APPLICABLE';
    }
    //If logfileValidation is enabled, then the trail is compliant
    else if (configurationItem.configuration.logFileValidationEnabled) {
        return 'COMPLIANT';
    }
    else {
        return 'NON_COMPLIANT';
    }
}

In this code snippet, we first ensure that the evaluation is being performed for a trail. Then we check whether the LogFileValidationEnabled property of the trail is set to true. If log file validation is enabled, the trail is marked compliant. Otherwise, the trail is marked noncompliant.
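Because the snippet above depends on the blueprint’s checkDefined helper, here is a self-contained harness (with a simplified stand-in for that helper) that you can run locally with Node.js to exercise all three branches of the compliance logic:

```javascript
// Minimal stand-in for the blueprint's checkDefined helper.
function checkDefined(ref, name) {
    if (ref === undefined || ref === null) {
        throw new Error(`Error: ${name} is not defined`);
    }
    return ref;
}

// Same logic as the blueprint modification above.
function evaluateCompliance(configurationItem, ruleParameters) {
    checkDefined(configurationItem, 'configurationItem');
    checkDefined(configurationItem.configuration, 'configurationItem.configuration');
    checkDefined(ruleParameters, 'ruleParameters');
    if ('AWS::CloudTrail::Trail' !== configurationItem.resourceType) {
        return 'NOT_APPLICABLE';
    } else if (configurationItem.configuration.logFileValidationEnabled) {
        return 'COMPLIANT';
    }
    return 'NON_COMPLIANT';
}

// A trail with log file validation enabled is compliant...
console.log(evaluateCompliance(
    { resourceType: 'AWS::CloudTrail::Trail',
      configuration: { logFileValidationEnabled: true } }, {}));
// ...and a non-trail resource is out of scope for this rule.
console.log(evaluateCompliance(
    { resourceType: 'AWS::S3::Bucket', configuration: {} }, {}));
```

The sample configuration items here are hand-written fragments; a real configuration item delivered by AWS Config carries many more fields, but only resourceType and configuration.logFileValidationEnabled matter to this rule.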

Because this Lambda function is created for reporting evaluation results in the managed-account, the Lambda function will need to be able to call the PutEvaluations Config API (and other APIs, if needed) on the managed-account. We’ll pass the ARN of an IAM role in the managed-account to this Lambda function as a rule parameter. We will need to add a few lines of code to the Lambda function’s handler in order to assume the IAM role passed on by the Config rule in the managed-account:

const aws = require('aws-sdk'); // already present at the top of the blueprint

exports.handler = (event, context, callback) => {
    event = checkDefined(event, 'event');
    const invokingEvent = JSON.parse(event.invokingEvent);
    const ruleParameters = JSON.parse(event.ruleParameters);
    const configurationItem = checkDefined(invokingEvent.configurationItem, 'invokingEvent.configurationItem');
    let compliance = 'NOT_APPLICABLE';
    const putEvaluationsRequest = {}; 
    if (isApplicable(invokingEvent.configurationItem, event)) {
        // Invoke the compliance checking function.
        compliance = evaluateCompliance(invokingEvent.configurationItem, ruleParameters);
    } 
    // Put together the request that reports the evaluation status
    // Note that we're choosing to report this evaluation against the resource that was passed in.
    // You can choose to report this against any other resource type supported by Config 

    putEvaluationsRequest.Evaluations = [{
        ComplianceResourceType: configurationItem.resourceType,
        ComplianceResourceId: configurationItem.resourceId,
        ComplianceType: compliance,
        OrderingTimestamp: configurationItem.configurationItemCaptureTime
    }];
    putEvaluationsRequest.ResultToken = event.resultToken;
    // Assume the role passed from the managed-account
    aws.config.credentials = new aws.TemporaryCredentials({RoleArn: ruleParameters.executionRole});
    let config = new aws.ConfigService({});
    // Invoke the Config API to report the result of the evaluation
    config.putEvaluations(putEvaluationsRequest, callback);
};

In this code snippet, the ARN of the IAM role in the managed-account is passed to this Lambda function as a rule parameter called executionRole. The lines that construct aws.TemporaryCredentials and the ConfigService client assume that role in the managed-account before reporting results. Finally, we select the appropriate execution role (in the admin-account) and save the function.
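For illustration, the rule parameters reach the handler as a JSON-encoded string in event.ruleParameters; once parsed, the payload for this rule is simply (role name as used later in this post, account ID a placeholder):

```json
{
  "executionRole": "arn:aws:iam::<managed-account>:role/config-rule-admin"
}
```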

Make a note of the IAM role in the admin-account assigned to the Lambda function and the ARN of the Lambda function. We’ll need to refer to these later. You can find the ARN of the Lambda function in the upper-right corner of the AWS Lambda console.

Step 2: Authorize Config Rules in Other Accounts to Invoke a Lambda Function in Your Account

Because the Lambda function we just created will be invoked by the managed-account, we need to add a resource policy that allows the managed-account to perform this action. Resource policies for Lambda functions can be applied only through the AWS CLI or SDKs.

Here’s a CLI command you can use to add the resource policy for the managed-account:

$ aws lambda add-permission \
  --function-name cloudtrailLogValidationEnabled \
  --region <region> \
  --statement-id <id> \
  --action "lambda:InvokeFunction" \
  --principal config.amazonaws.com \
  --source-account <managed-account>

This statement allows the principal config.amazonaws.com (AWS Config) to perform the lambda:InvokeFunction action on this function, but only on behalf of the specified source account. If more than one account will invoke the Lambda function, each account must be authorized with its own statement.
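For reference, the statement that add-permission appends to the function’s resource policy looks roughly like the following (placeholders as in the command above):

```json
{
  "Sid": "<id>",
  "Effect": "Allow",
  "Principal": { "Service": "config.amazonaws.com" },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:<region>:<admin-account>:function:cloudtrailLogValidationEnabled",
  "Condition": { "StringEquals": { "AWS:SourceAccount": "<managed-account>" } }
}
```

You can inspect the accumulated policy at any time with aws lambda get-policy --function-name cloudtrailLogValidationEnabled.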

Step 3: Create an IAM Role to Pass to the Lambda Function

Next, we need to create an IAM role in the managed-account that can be assumed by the Lambda function. If you want to use an existing role, you can skip to step 4.

Sign in to the AWS IAM console of one of the managed-accounts. In the left navigation, choose Roles, and then choose Create New Role.

On the Set Role Name page, type a name for the role:

Because we are creating this role for cross-account access between the AWS accounts we own, on the Select Role Type page, select Role for Cross-Account Access:

After we choose this option, we must type the account number of the account to which we want to allow access. In our case, we will type the account number of the admin-account.

After we complete this step, we can attach policies to the role. We will skip this step for now. Choose Next Step to review and create the role.

Step 4: Add Policy and Trust Relationships to the IAM Role

From the IAM console of the managed-account, find the IAM role that the Lambda function will assume, and click it to modify the role:

We now see options to modify permissions and trust relationships. This IAM role must have, at minimum, permission to call the PutEvaluations Config API in the managed-account. You can attach an existing managed policy or create an inline policy to grant permission to the role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "config:PutEvaluations"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

This policy allows only the PutEvaluations action on the AWS Config service. You might want to extend the role’s permissions to perform other actions, depending on the evaluation logic you implement in the Lambda function.

We also need to ensure that the trust relationship is set up correctly. If you followed the steps in this post to create the role, you will see the admin-account has already been added as a trusted entity. This trust policy allows any entity in the admin-account to assume the role.

You can edit the trust relationship to restrict permission only to the role in the admin-account:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<admin-account>:role/lambda_config_role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Here, lambda_config_role is the role we assigned to the Lambda function we created in the admin-account.

Step 5: Pass the IAM Role to the Lambda Function

The last step involves creating a custom rule in the managed-account. In the AWS Config console of the managed-account, follow the steps to create a custom Config rule. On the rule creation page, we will provide a name and description and paste the ARN of the Lambda function we created in the admin-account:

Because we want this rule to be triggered upon changes to CloudTrail trails, for Trigger type, select Configuration changes. For Scope of changes, select Resources. For Resources, add CloudTrail:Trail. Finally, add the executionRole rule parameter and paste the ARN of the IAM role: arn:aws:iam::<managed-account>:role/config-rule-admin.
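If you prefer the CLI to the console, the same console steps correspond to a rule definition along these lines, created with aws configservice put-config-rule --config-rule file://rule.json (the rule name here is illustrative; region and account IDs are placeholders):

```json
{
  "ConfigRuleName": "cloudtrail-log-validation-enabled",
  "Description": "Checks that CloudTrail trails have log file validation enabled",
  "Scope": {
    "ComplianceResourceTypes": ["AWS::CloudTrail::Trail"]
  },
  "Source": {
    "Owner": "CUSTOM_LAMBDA",
    "SourceIdentifier": "arn:aws:lambda:<region>:<admin-account>:function:cloudtrailLogValidationEnabled",
    "SourceDetails": [
      {
        "EventSource": "aws.config",
        "MessageType": "ConfigurationItemChangeNotification"
      }
    ]
  },
  "InputParameters": "{\"executionRole\":\"arn:aws:iam::<managed-account>:role/config-rule-admin\"}"
}
```

Note that InputParameters is itself a JSON-encoded string, which is why the executionRole value is escaped.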

 

Save your changes and then create the rule. After the rule is evaluated, inspect the results:

In this example, there are two CloudTrail trails, one of which is noncompliant. Upon further inspection, we find that the noncompliant trail does not have log file validation enabled:

After we enable log file validation, the rule will be evaluated again and the trail will be marked compliant.

If you are managing multiple AWS accounts, you may want an easy way to create the Config rule and IAM role in all of the accounts in your organization. This can be achieved with the AWS CloudFormation template I have provided here. Before using this CloudFormation template, replace the admin-account placeholder with the account number of the AWS account you plan to use for centrally managing the Lambda function. Once the Config rule and IAM role are set up in all of the managed accounts, you can simply modify the Lambda function in the admin-account to add further checks.
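The template itself is not reproduced in this post, but a minimal sketch of what such a template contains, combining the IAM role from steps 3 and 4 with the Config rule from step 5, might look like the following (resource names are illustrative, and the <region> and <admin-account> placeholders must be replaced before use):

```yaml
Resources:
  ConfigRuleExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: config-rule-admin
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: arn:aws:iam::<admin-account>:role/lambda_config_role
            Action: sts:AssumeRole
      Policies:
        - PolicyName: AllowPutEvaluations
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: config:PutEvaluations
                Resource: '*'
  CloudTrailLogValidationRule:
    Type: AWS::Config::ConfigRule
    DependsOn: ConfigRuleExecutionRole
    Properties:
      ConfigRuleName: cloudtrail-log-validation-enabled
      Scope:
        ComplianceResourceTypes:
          - AWS::CloudTrail::Trail
      InputParameters:
        executionRole: !Sub arn:aws:iam::${AWS::AccountId}:role/config-rule-admin
      Source:
        Owner: CUSTOM_LAMBDA
        SourceIdentifier: arn:aws:lambda:<region>:<admin-account>:function:cloudtrailLogValidationEnabled
        SourceDetails:
          - EventSource: aws.config
            MessageType: ConfigurationItemChangeNotification
```

Deploying this stack in each managed account (for example, via CloudFormation StackSets) yields the same result as the manual steps above.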

Conclusion

In this blog post, I showed how you can create AWS Config Rules that use Lambda functions with cross-account authorization. This setup allows you to centrally manage the Config rules and associated Lambda functions and retain control over the source code. As an alternative to this approach, you can use a CloudFormation template to create and update Config rules and associated Lambda functions in the managed accounts. The cross-account authorization we set up for the Lambda function in this blog post can also be extended to perform actions beyond reporting evaluation results. To do this, you need to add permission for the relevant APIs in the managed accounts.

We welcome your feedback! Leave comments in the section below or contact us on the AWS Config forum.

How to Centrally Manage AWS Config Rules across Multiple AWS Accounts

Post Syndicated from Chayan Biswas original http://blogs.aws.amazon.com/application-management/post/Tx23LIUFRTWOHNB/How-to-Centrally-Manage-AWS-Config-Rules-across-Multiple-AWS-Accounts

AWS Config Rules allow you to codify policies and best practices for your organization and evaluate configuration changes to AWS resources against these policies. If you manage multiple AWS accounts, you might want to centrally govern and define these policies for all of the AWS accounts in your organization. With appropriate authorization, you can create a Config rule in one account that uses an AWS Lambda function owned by another account. Such a setup allows you to maintain a single copy of the Lambda function. You do not have to duplicate source code across accounts.

In this post, I will show you how to create Config rules with appropriate cross-account Lambda function authorization. I’ll use a central account that I refer to as the admin-account to create a Lambda function. All of the other accounts then point to the Lambda function owned by the admin-account to create a Config rule. Let’s call one of these accounts the managed-account. This setup allows you to maintain tight control over the source code and eliminates the need to create a copy of Lambda functions in all of the accounts. You no longer have to deploy updates to the Lambda function in these individual accounts.

We will complete these steps for the setup:

  1. Create a Lambda function for a cross-account Config rule in the admin-account.
  2. Authorize Config Rules in the managed-account to invoke a Lambda function in the admin-account.
  3. Create an IAM role in the managed-account to pass to the Lambda function.
  4. Add a policy and trust relationship to the IAM role in the managed-account.
  5. Pass the IAM role from the managed-account to the Lambda function.

Step 1: Create a Lambda Function for a Cross-Account Config Rule

Let’s first create a Lambda function in the admin-account. In this example, the Lambda function checks if log file validation is enabled for all of the AWS CloudTrail trails. Enabling log file validation helps you determine whether a log file was modified or deleted after CloudTrail delivered it. For more information about CloudTrail log file validation, see Validating CloudTrail Log File Integrity.

Note: This rule is an example only. You do not need to create this specific rule to set up cross-account Config rules. You can apply the concept illustrated here to any new or existing Config rule.

To get started, in the AWS Lambda console, choose the config-rule-change-triggered blueprint.

 

Next, modify the evaluateCompliance function and the handler invoked by Lambda. Leave the rest of the blueprint code as is.

function evaluateCompliance(configurationItem, ruleParameters) {
    checkDefined(configurationItem, 'configurationItem');
    checkDefined(configurationItem.configuration, 'configurationItem.configuration');
    checkDefined(ruleParameters, 'ruleParameters');
    //Check if the resource is of type CloudTrail
    if ('AWS::CloudTrail::Trail' !== configurationItem.resourceType) {
        return 'NOT_APPLICABLE';
    }
    //If logfileValidation is enabled, then the trail is compliant
    else if (configurationItem.configuration.logFileValidationEnabled) {
        return 'COMPLIANT';
    }
    else {
        return 'NON_COMPLIANT';
    }
}

In this code snippet, we first ensure that the evaluation is being performed for a trail. Then we check whether the LogFileValidationEnabled property of the trail is set to true. If log file validation is enabled, the trail is marked compliant. Otherwise, the trail is marked noncompliant.

Because this Lambda function is created for reporting evaluation results in the managed-account, the Lambda function will need to be able to call the PutEvaluations Config API (and other APIs, if needed) on the managed-account. We’ll pass the ARN of an IAM role in the managed-account to this Lambda function as a rule parameter. We will need to add a few lines of code to the Lambda function’s handler in order to assume the IAM role passed on by the Config rule in the managed-account:

exports.handler = (event, context, callback) => {
    event = checkDefined(event, 'event');
    const invokingEvent = JSON.parse(event.invokingEvent);
    const ruleParameters = JSON.parse(event.ruleParameters);
    const configurationItem = checkDefined(invokingEvent.configurationItem, 'invokingEvent.configurationItem');
    let compliance = 'NOT_APPLICABLE';
    const putEvaluationsRequest = {}; 
    if (isApplicable(invokingEvent.configurationItem, event)) {
        // Invoke the compliance checking function.
        compliance = evaluateCompliance(invokingEvent.configurationItem, ruleParameters);
    } 
    // Put together the request that reports the evaluation status
    // Note that we're choosing to report this evaluation against the resource that was passed in.
    // You can choose to report this against any other resource type supported by Config 

    putEvaluationsRequest.Evaluations = [{
        ComplianceResourceType: configurationItem.resourceType,
        ComplianceResourceId: configurationItem.resourceId,
        ComplianceType: compliance,
        OrderingTimestamp: configurationItem.configurationItemCaptureTime
    }];
    putEvaluationsRequest.ResultToken = event.resultToken;
    // Assume the role passed from the managed-account
    aws.config.credentials = new aws.TemporaryCredentials({RoleArn: ruleParameters.executionRole});
    let config = new aws.ConfigService({});
    // Invoke the Config API to report the result of the evaluation
    config.putEvaluations(putEvaluationsRequest, callback);
};

In this code snippet, the ARN of the IAM role in the managed-account is passed to this Lambda function as a rule parameter called executionRole. The two lines near the end of the handler, which set aws.config.credentials to temporary credentials for that role before constructing the ConfigService client, are what assume the role in the managed-account. Finally, we select the appropriate execution role (in the admin-account) and save the function.


Make a note of the IAM role in the admin-account assigned to the Lambda function and the ARN of the Lambda function. We’ll need to refer to these later. You can find the ARN of the Lambda function in the upper-right corner of the AWS Lambda console.

Step 2: Authorize Config Rules in Other Accounts to Invoke a Lambda Function in Your Account

Because the Lambda function we just created will be invoked by the managed-account, we need to add a resource policy that allows the managed-account to perform this action. Resource policies for Lambda functions can be applied only through the AWS CLI or SDKs.

Here’s a CLI command you can use to add the resource policy for the managed-account:

$ aws lambda add-permission \
  --function-name cloudtrailLogValidationEnabled \
  --region <region> \
  --statement-id <id> \
  --action "lambda:InvokeFunction" \
  --principal config.amazonaws.com \
  --source-account <managed-account>

This statement allows only the principal config.amazonaws.com (AWS Config), acting on behalf of the specified source account, to perform the InvokeFunction action on this Lambda function. If more than one account will invoke the Lambda function, each account must be authorized with its own statement.
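When many managed accounts need to be authorized, it can help to generate the permission parameters programmatically. The helper below is a hypothetical sketch (not part of the original post): it builds one parameter object per account in the shape accepted by the AWS SDK's Lambda AddPermission call, which you could then apply in a loop. The account numbers are placeholders.

```javascript
// Hypothetical helper: build AddPermission parameters for each
// managed account that should be allowed to invoke the function.
function buildAddPermissionParams(functionName, accountIds) {
    return accountIds.map(accountId => ({
        FunctionName: functionName,
        StatementId: `config-invoke-${accountId}`,   // must be unique per statement
        Action: 'lambda:InvokeFunction',
        Principal: 'config.amazonaws.com',
        SourceAccount: accountId
    }));
}

// Example: authorize two managed accounts (placeholder account numbers).
const params = buildAddPermissionParams('cloudtrailLogValidationEnabled',
    ['111111111111', '222222222222']);
console.log(params.length);          // → 2
console.log(params[0].StatementId);  // → config-invoke-111111111111
// Each entry could then be passed to lambda.addPermission(...) in a loop.
```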

Step 3: Create an IAM Role to Pass to the Lambda Function

Next, we need to create an IAM role in the managed-account that can be assumed by the Lambda function. If you want to use an existing role, you can skip to step 4.

Sign in to the AWS IAM console of one of the managed-accounts. In the left navigation, choose Roles, and then choose Create New Role.

On the Set Role Name page, type a name for the role:

Because we are creating this role for cross-account access between the AWS accounts we own, on the Select Role Type page, select Role for Cross-Account Access:

After we choose this option, we must type the account number of the account to which we want to allow access. In our case, we will type the account number of the admin-account.

After we complete this step, we can attach policies to the role. We will skip this step for now. Choose Next Step to review and create the role.

Step 4: Add Policy and Trust Relationships to the IAM Role

From the IAM console of the managed-account, choose the IAM role that the Lambda function will assume, and then click it to modify the role:

We now see options to modify permissions and trust relationships. This IAM role must have, at minimum, permission to call the PutEvaluations Config API in the managed-account. You can attach an existing managed policy or create an inline policy to grant permission to the role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "config:PutEvaluations"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

This policy allows only the PutEvaluations action on the AWS Config service. You might want to extend the role’s permissions to cover other actions, depending on the evaluation logic you implement in the Lambda function.

We also need to ensure that the trust relationship is set up correctly. If you followed the steps in this post to create the role, you will see the admin-account has already been added as a trusted entity. This trust policy allows any entity in the admin-account to assume the role.

You can edit the trust relationship to restrict permission only to the role in the admin-account:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<admin-account>:role/lambda_config_role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Here, lambda_config_role is the role we assigned to the Lambda function we created in the admin-account.
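If you are rolling this role out to many managed accounts, it can be convenient to construct the trust policy document programmatically instead of editing it by hand in each console. The helper below is an assumption for illustration (not from the post); it produces the same trust policy shown above for a given admin account number and role name:

```javascript
// Hypothetical helper: build the trust policy that lets the Lambda
// function's role in the admin-account assume this role.
function buildTrustPolicy(adminAccountId, roleName) {
    return {
        Version: '2012-10-17',
        Statement: [{
            Sid: '',
            Effect: 'Allow',
            Principal: {
                AWS: `arn:aws:iam::${adminAccountId}:role/${roleName}`
            },
            Action: 'sts:AssumeRole'
        }]
    };
}

// Example with a placeholder admin account number.
const policy = buildTrustPolicy('111111111111', 'lambda_config_role');
console.log(policy.Statement[0].Principal.AWS);
// → arn:aws:iam::111111111111:role/lambda_config_role
```

The resulting object can be serialized with JSON.stringify and passed as the assume-role policy document when creating or updating the role.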

Step 5: Pass the IAM Role to the Lambda Function

The last step involves creating a custom rule in the managed-account. In the AWS Config console of the managed-account, follow the steps to create a custom Config rule. On the rule creation page, we will provide a name and description and paste the ARN of the Lambda function we created in the admin-account:

Because we want this rule to be triggered upon changes to CloudTrail trails, for Trigger type, select Configuration changes. For Scope of changes, select Resources. For Resources, add CloudTrail:Trail. Finally, add the executionRole rule parameter and paste the ARN of the IAM role: arn:aws:iam::<managed-account>:role/config-rule-admin.

 

Save your changes and then create the rule. After the rule is evaluated, inspect the results:

In this example, there are two CloudTrail trails, one of which is noncompliant. Upon further inspection, we find that the noncompliant trail does not have log file validation enabled:

After we enable log file validation, the rule will be evaluated again and the trail will be marked compliant.

If you are managing multiple AWS accounts, you may want an easy way to create the Config rule and IAM role in all the accounts in your organization. This can be achieved by using the AWS CloudFormation template I have provided here. Before using this CloudFormation template, replace the admin-account placeholder with the account number of the AWS account you plan to use for centrally managing the Lambda function. Once the Config rule and IAM role are set up in all the managed accounts, you can simply modify the Lambda function in the admin-account to add further checks.

Conclusion

In this blog post, I showed how you can create AWS Config Rules that use Lambda functions with cross-account authorization. This setup allows you to centrally manage the Config rules and associated Lambda functions and retain control over the source code. As an alternative to this approach, you can use a CloudFormation template to create and update Config rules and associated Lambda functions in the managed accounts. The cross-account authorization we set up for the Lambda function in this blog post can also be extended to perform actions beyond reporting evaluation results. To do this, you need to add permission for the relevant APIs in the managed accounts.

We welcome your feedback! Leave comments in the section below or contact us on the AWS Config forum.
