Tag Archives: plants

Tackling climate change and helping the community

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/fair-haven-weather-station/

In today’s guest post, seventh-grade students Evan Callas, Will Ross, Tyler Fallon, and Kyle Fugate share their story of using the Raspberry Pi Oracle Weather Station in their Innovation Lab class, headed by Raspberry Pi Certified Educator Chris Aviles.

Raspberry Pi Certified Educator Chris Aviles Innovation Lab Oracle Weather Station

United Nations Sustainable Development Goals

The past couple of weeks in our Innovation Lab class, our teacher, Mr Aviles, has challenged us students to design a project that helps solve one of the United Nations Sustainable Development Goals. We chose Climate Action. Innovation Lab is a class that gives students the opportunity to learn about the crossroads where technology, the environment, and entrepreneurship meet. Everyone takes their own path in innovation and learns about the environment using project-based learning.

Raspberry Pi Oracle Weather Station

For our climate change challenge, we decided to build a Raspberry Pi Oracle Weather Station. Tackling the issues of climate change in a way that helps our community stood out to us because we knew that, with the help of this weather station, we could send local data to farmers and fishermen in town. Recent changes in climate have been affecting farmers’ crops. Unexpected rain, heat, and other unusual weather patterns can completely destabilize the natural growth of the plants and destroy their crops altogether. The amount of labour farmers must put in has also significantly increased, forcing them to grow more food with fewer resources. By using our Raspberry Pi Oracle Weather Station to alert local farmers, they can be more prepared for and aware of the weather, leading to better crops and safer boating.

Growing teamwork and coding skills

The process of setting up our weather station was fun and simple. Raspberry Pi made the instructions very easy to understand and read, which was very helpful for our team who had little experience in coding or physical computing. We enjoyed working together as a team and were happy to be growing our teamwork skills.

Once we constructed and coded the weather station, we learned that we needed to support the station with PVC pipes. After we completed these steps, we brought the weather station up to the roof of the school and began collecting data. Our information is currently being sent to the Initial State dashboard so that we can share the information with anyone interested. This information will also be recorded and seen by other schools, businesses, and others from around the world who are using the weather station. For example, we can see the weather in countries such as France, Greece and Italy.
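
For readers curious what that data-streaming step looks like in code, here is a minimal sketch. Initial State accepts batches of key/value events over HTTP (there is also an official ISStreamer Python library). The sensor names below are hypothetical stand-ins for the station’s real readings, and the endpoint and header names reflect Initial State’s HTTP events API as we understand it, so check their documentation before relying on them:

```python
import json
import urllib.request

def build_payload(readings):
    """Format readings as a list of key/value events for the Initial State API."""
    return [{"key": name, "value": value} for name, value in readings.items()]

def send_to_initial_state(readings, bucket_key, access_key):
    # Initial State's HTTP events endpoint (assumed; not called here, since it
    # needs real account keys).
    req = urllib.request.Request(
        "https://groker.init.st/api/events",
        data=json.dumps(build_payload(readings)).encode(),
        headers={
            "Content-Type": "application/json",
            "X-IS-AccessKey": access_key,
            "X-IS-BucketKey": bucket_key,
        },
    )
    return urllib.request.urlopen(req)

# Hypothetical readings; on the real station these come from the HAT's sensors.
readings = {"temperature_c": 21.3, "pressure_hpa": 1013.2, "wind_speed_kph": 7.5}
print(json.dumps(build_payload(readings)))
```

A script like this, run on a timer, is enough to keep a public dashboard fed with live weather data.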

Raspberry Pi allows us to build these amazing projects that help us to enjoy coding and physical computing in a fun, engaging, and impactful way. We picked climate change because we care about our community and would like to make a substantial contribution to our town, Fair Haven, New Jersey. It is not every day that kids are given these kinds of opportunities, and we are very lucky and grateful to go to a school and learn from a teacher where these opportunities are given to us. Thanks, Mr Aviles!

To see more awesome projects by Mr Aviles’s class, you can keep up with him on his blog and follow him on Twitter.

The post Tackling climate change and helping the community appeared first on Raspberry Pi.

New – Machine Learning Inference at the Edge Using AWS Greengrass

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-machine-learning-inference-at-the-edge-using-aws-greengrass/

What happens when you combine the Internet of Things, Machine Learning, and Edge Computing? Before I tell you, let’s review each one and discuss what AWS has to offer.

Internet of Things (IoT) – Devices that connect the physical world and the digital one. The devices, often equipped with one or more types of sensors, can be found in factories, vehicles, mines, fields, homes, and so forth. Important AWS services include AWS IoT Core, AWS IoT Analytics, AWS IoT Device Management, and Amazon FreeRTOS, along with others that you can find on the AWS IoT page.

Machine Learning (ML) – Systems that can be trained using an at-scale dataset and statistical algorithms, and used to make inferences from fresh data. At Amazon we use machine learning to drive the recommendations that you see when you shop, to optimize the paths in our fulfillment centers, to fly drones, and much more. We support leading open source machine learning frameworks such as TensorFlow and MXNet, and make ML accessible and easy to use through Amazon SageMaker. We also provide Amazon Rekognition for images and for video, Amazon Lex for chatbots, and a wide array of language services for text analysis, translation, speech recognition, and text to speech.

Edge Computing – The power to have compute resources and decision-making capabilities in disparate locations, often with intermittent or no connectivity to the cloud. AWS Greengrass builds on AWS IoT, giving you the ability to run Lambda functions and keep device state in sync even when not connected to the Internet.

ML Inference at the Edge
Today I would like to toss all three of these important new technologies into a blender! You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields, and homes that I mentioned.

Here are a few of the many ways that you can put Greengrass ML Inference to use:

Precision Farming – With an ever-growing world population and unpredictable weather that can affect crop yields, the opportunity to use technology to increase yields is immense. Intelligent devices that are literally in the field can process images of soil, plants, pests, and crops, taking local corrective action and sending status reports to the cloud.

Physical Security – Smart devices (including the AWS DeepLens) can process images and scenes locally, looking for objects, watching for changes, and even detecting faces. When something of interest or concern arises, the device can pass the image or the video to the cloud and use Amazon Rekognition to take a closer look.

Industrial Maintenance – Smart, local monitoring can increase operational efficiency and reduce unplanned downtime. The monitors can run inference operations on power consumption, noise levels, and vibration to flag anomalies, predict failures, and detect faulty equipment.
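
For the industrial-maintenance case, even the simplest statistical check conveys the idea: flag readings that sit far from the recent baseline. Here is a minimal sketch; the three-sigma threshold and the sample vibration values are purely illustrative, and a production monitor would of course use a trained model rather than this rule of thumb:

```python
from statistics import mean, stdev

def anomalies(readings, k=3.0):
    """Flag readings more than k standard deviations from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    return [x for x in readings if abs(x - mu) > k * sigma]

# Twenty normal vibration samples and one spike:
print(anomalies([1.0] * 20 + [9.0]))  # -> [9.0]
```

Running a check like this on the device means only the anomalies, not the raw sensor stream, need to travel to the cloud.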

Greengrass ML Inference Overview
There are several different aspects to this new AWS feature. Let’s take a look at each one:

Machine Learning Models – Precompiled TensorFlow and MXNet libraries, optimized for production use on NVIDIA Jetson TX2 and Intel Atom devices, and for development use on 32-bit Raspberry Pi devices. The optimized libraries can take advantage of GPU and FPGA hardware accelerators at the edge in order to provide fast, local inferences.

Model Building and Training – The ability to use Amazon SageMaker and other cloud-based ML tools to build, train, and test your models before deploying them to your IoT devices. To learn more about SageMaker, read Amazon SageMaker – Accelerated Machine Learning.

Model Deployment – SageMaker models can (if you give them the proper IAM permissions) be referenced directly from your Greengrass groups. You can also make use of models stored in S3 buckets. You can add a new machine learning resource to a group with a couple of clicks in the Greengrass console.
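
The on-device half of this pattern is typically a long-lived Lambda function that loads the model once at startup and then serves inference on each incoming message. The sketch below shows only that shape: `load_model` and the returned result are stand-ins, since a real function would deserialize the MXNet or TensorFlow model from the local path Greengrass maps the ML resource to, and publish results over MQTT via greengrasssdk:

```python
# Sketch of the Greengrass Lambda pattern: load the model once when the
# long-lived function starts, then run inference per message.

def load_model():
    # Stand-in for deserializing a SageMaker-trained model from the local
    # path that the Greengrass ML resource is mapped to.
    weights = [0.2, 0.5, 0.3]
    return lambda features: max(
        range(len(weights)), key=lambda i: weights[i] * features[i]
    )

MODEL = load_model()  # loaded once, outside the handler

def function_handler(event, context=None):
    features = event.get("features", [1.0, 1.0, 1.0])
    label = MODEL(features)
    # A real function would publish this via
    # greengrasssdk.client("iot-data").publish(topic=..., payload=...).
    return {"label": label}

print(function_handler({"features": [1.0, 2.0, 1.0]}))
```

Keeping the model load outside the handler matters on low-powered devices, where deserializing a model per request would dominate the inference time.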

These new features are available now and you can start using them today! To learn more, read Perform Machine Learning Inference.

Jeff;

 

Hacker House’s Zero W–powered automated gardener

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/hacker-house-automated-gardener/

Are the plants in your home or office looking somewhat neglected? Then build an automated gardener using a Raspberry Pi Zero W, with help from the team at Hacker House.

Make a Raspberry Pi Automated Gardener

See how we built it, including our materials, code, and supplemental instructions, on Hackster.io: https://www.hackster.io/hackerhouse/automated-indoor-gardener-a90907. With how busy our lives are, it’s sometimes easy to forget to pay a little attention to your thirsty indoor plants until it’s too late and you are left with a crusty pile of yellow carcasses.

Building an automated gardener

Tired of their plants looking a little too ‘crispy’, Hacker House have created an automated gardener using a Raspberry Pi Zero W alongside some 3D-printed parts, a 5v USB grow light, and a peristaltic pump.

Hacker House Automated Gardener Raspberry Pi

They designed and 3D printed a PLA casing for the project, allowing enough space within for the Raspberry Pi Zero W, the pump, and the added electronics including soldered wiring and two N-channel power MOSFETs. The MOSFETs serve to switch the light and the pump on and off.

Due to the amount of power the light and pump need, the team replaced the Pi’s standard micro USB power supply with a 12v switching supply.

Coding an automated gardener

All the code for the project — a fairly basic Python script — is on the Hacker House GitHub repository. To fit it to your requirements, you may need to edit a few lines of the code, and Hacker House provides information on how to do this. You can also find more details of the build on the hackster.io project page.
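
The preset-timing approach is easy to sketch. In the snippet below, `set_pin` is a stand-in for `RPi.GPIO.output` (on a real Pi you would first call `GPIO.setmode(GPIO.BCM)` and `GPIO.setup(pin, GPIO.OUT)`), and the pin numbers and schedule are hypothetical, so the scheduling logic can be seen without any hardware attached:

```python
from datetime import datetime

LIGHT_PIN, PUMP_PIN = 18, 23  # hypothetical BCM pins driving the two MOSFET gates
pin_log = []                  # records switching actions in place of real GPIO output

def set_pin(pin, state):
    # Stand-in for RPi.GPIO.output(pin, state).
    pin_log.append((pin, state))

def tick(now):
    """Apply a preset schedule: grow light on 07:00-19:00, watering at 08:00."""
    set_pin(LIGHT_PIN, 7 <= now.hour < 19)
    if (now.hour, now.minute) == (8, 0):
        set_pin(PUMP_PIN, True)   # run the peristaltic pump briefly
        set_pin(PUMP_PIN, False)

tick(datetime(2018, 5, 1, 12, 0))
print(pin_log)  # light on at midday, pump untouched
```

Calling `tick` once a minute from a loop (or cron) is all the scheduling the project needs.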

While the project runs with preset timings, there’s no reason why you couldn’t upgrade it to be app-based, for example to set a watering schedule when you’re away on holiday.

To see more from the Hacker House team, be sure to follow them on YouTube. You can also check out some of their previous Raspberry Pi projects featured on our blog, such as the smartphone-connected door lock and gesture-controlled holographic visualiser.

Raspberry Pi and your home garden

Raspberry Pis make great babysitters for your favourite plants, both inside and outside your home. Here at Pi Towers, we have Bert, our Slack- and Twitter-connected potted plant who reminds us when he’s thirsty and in need of water.

Bert Plant on Twitter

I’m good. There’s plenty to drink!

And outside of the office, we’ve seen plenty of your vegetation-focused projects using Raspberry Pi for planting, monitoring, or, well, commenting on social and political events within the media.

If you use a Raspberry Pi within your home gardening projects, we’d love to see how you’ve done it. So be sure to share a link with us either in the comments below, or via our social media channels.

 

The post Hacker House’s Zero W–powered automated gardener appeared first on Raspberry Pi.

Jackpotting Attacks Against US ATMs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/jackpotting_att.html

Brian Krebs is reporting sophisticated jackpotting attacks against US ATMs. The attacker gains physical access to the ATM, plants malware using specialized electronics, and then later returns and forces the machine to dispense all the cash it has inside.

The Secret Service alert explains that the attackers typically use an endoscope — a slender, flexible instrument traditionally used in medicine to give physicians a look inside the human body — to locate the internal portion of the cash machine where they can attach a cord that allows them to sync their laptop with the ATM’s computer.

“Once this is complete, the ATM is controlled by the fraudsters and the ATM will appear Out of Service to potential customers,” reads the confidential Secret Service alert.

At this point, the crook(s) installing the malware will contact co-conspirators who can remotely control the ATMs and force the machines to dispense cash.

“In previous Ploutus.D attacks, the ATM continuously dispensed at a rate of 40 bills every 23 seconds,” the alert continues. Once the dispense cycle starts, the only way to stop it is to press cancel on the keypad. Otherwise, the machine is completely emptied of cash, according to the alert.

Lots of details in the article.

Skygofree: New Government Malware for Android

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/skygofree_new_g.html

Kaspersky Labs is reporting on a new piece of sophisticated malware:

We observed many web landing pages that mimic the sites of mobile operators and which are used to spread the Android implants. These domains have been registered by the attackers since 2015. According to our telemetry, that was the year the distribution campaign was at its most active. The activities continue: the most recently observed domain was registered on October 31, 2017. Based on our KSN statistics, there are several infected individuals, exclusively in Italy.

Moreover, as we dived deeper into the investigation, we discovered several spyware tools for Windows that form an implant for exfiltrating sensitive data on a targeted machine. The version we found was built at the beginning of 2017, and at the moment we are not sure whether this implant has been used in the wild.

It seems to be Italian. Ars Technica speculates that it is related to Hacking Team:

That’s not to say the malware is perfect. The various versions examined by Kaspersky Lab contained several artifacts that provide valuable clues about the people who may have developed and maintained the code. Traces include the domain name h3g.co, which was registered by Italian IT firm Negg International. Negg officials didn’t respond to an email requesting comment for this post. The malware may be filling a void left after the epic hack in 2015 of Hacking Team, another Italy-based developer of spyware.

BoingBoing post.

ShadowBrokers Releases NSA UNITEDRAKE Manual

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/shadowbrokers_r.html

The ShadowBrokers released the manual for UNITEDRAKE, a sophisticated NSA Trojan that targets Windows machines:

Able to compromise Windows PCs running on XP, Windows Server 2003 and 2008, Vista, Windows 7 SP 1 and below, as well as Windows 8 and Windows Server 2012, the attack tool acts as a service to capture information.

UNITEDRAKE, described as a “fully extensible remote collection system designed for Windows targets,” also gives operators the opportunity to take complete control of a device.

The malware’s modules — including FOGGYBOTTOM and GROK — can perform tasks including listening in on and monitoring communications, capturing keystrokes as well as webcam and microphone usage, impersonating users, stealing diagnostics information, and self-destructing once tasks are completed.

More news.

UNITEDRAKE was mentioned in several Snowden documents and also in the TAO catalog of implants.

And Kaspersky Labs has found evidence of these tools in the wild, associated with the Equation Group — generally assumed to be the NSA:

The capabilities of several tools in the catalog identified by the codenames UNITEDRAKE, STRAITBAZZARE, VALIDATOR and SLICKERVICAR appear to match the tools Kaspersky found. These codenames don’t appear in the components from the Equation Group, but Kaspersky did find “UR” in EquationDrug, suggesting a possible connection to UNITEDRAKE (United Rake). Kaspersky also found other codenames in the components that aren’t in the NSA catalog but share the same naming conventions; they include SKYHOOKCHOW, STEALTHFIGHTER, DRINKPARSLEY, STRAITACID, LUTEUSOBSTOS, STRAITSHOOTER, and DESERTWINTER.

ShadowBrokers has only released the UNITEDRAKE manual, not the tool itself. Presumably they’re trying to sell that.

Tomato-Plant Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/tomato-plant_se.html

I have a soft spot for interesting biological security measures, especially by plants. I’ve used them as examples in several of my books. Here’s a new one: when tomato plants are attacked by caterpillars, they release a chemical that turns the caterpillars on each other:

It’s common for caterpillars to eat each other when they’re stressed out by the lack of food. (We’ve all been there.) But why would they start eating each other when the plant food is right in front of them? Answer: because of devious behavior control by plants.

When plants are attacked (read: eaten) they make themselves more toxic by activating a chemical called methyl jasmonate. Scientists sprayed tomato plants with methyl jasmonate to kick off these responses, then unleashed caterpillars on them.

Compared to an untreated plant, a high-dose plant had five times as much plant left behind because the caterpillars were turning on each other instead. The caterpillars on a treated tomato plant ate twice as many other caterpillars as the ones on a control plant.

Tweetponic lavender: nourishing nature with the Twitter API

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/tweetponic-lavender/

In a Manhattan gallery, there is an art installation that uses a Raspberry Pi to control the lights, nourishing an underground field of lavender. The twist: the Pi syncs the intensity of the lights to the activity of a dozen or so Twitter accounts belonging to media personalities and members of the US government.

In May 2017 I cultivated a piece of land in Midtown Manhattan nurtured by tweets.

Martin Roth (@martinroth02) on Instagram

Turning tweets into cellulose

Artist Martin Roth has used the Raspberry Pi to access the accounts via the Twitter API, and to track their behaviour. This information is then relayed to the lights in real time. The more tweets, retweets, and likes there are on these accounts at a given moment, the brighter the lights become, and the better the lavender plants grow. Thus Twitter storms are converted into plant food, and ultimately into a pleasant lavender scent.
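
The mapping itself can be very small. Here is a sketch of the kind of function involved, with an entirely hypothetical weighting and cap; the real installation would poll the accounts via the Twitter API (e.g. with Tweepy) and feed the result to a PWM-dimmed grow light:

```python
def brightness(tweets, retweets, likes, cap=200):
    """Map an account's recent activity to a PWM duty cycle (0-100)."""
    score = tweets * 3 + retweets * 2 + likes  # hypothetical weighting
    return min(100, round(100 * score / cap))

# A quiet account versus a full-blown Twitter storm:
print(brightness(2, 5, 10))      # modest glow
print(brightness(80, 150, 400))  # capped at full brightness
```

On the Pi, the returned duty cycle would go straight to something like `GPIO.PWM(...).ChangeDutyCycle(...)` or a mains dimmer board.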

Until June 21st @ ACF (11 East 52nd Street)

Martin Roth (@martinroth02) on Instagram

Regarding his motivation to create the art installation, Martin Roth says:

[The] Twitter storm is something to be resisted. But I am using it in my exhibition as a force to create growth.

The piece, descriptively titled In May 2017 I cultivated a piece of land in Midtown Manhattan nurtured by tweets, is on show at the Austrian Cultural Forum, New York.

Using the Twitter API as part of digital making

We’ve seen a number of cool makes using the Twitter API. These often involve the posting of tweets in response to real-world inputs. Some of our favourites are the tweeting cat flap Flappy McFlapface, the tweeting dog Oliver Twitch, and of course Pi Towers resident Bert the plant. It’s interesting to see the concept turned on its head.

If you feel inspired by these projects, head on over to our resource introducing the Twitter API using Python. Or do you already have a project, in progress or finished, that uses the API? Let us know about it in the comments!

The post Tweetponic lavender: nourishing nature with the Twitter API appeared first on Raspberry Pi.

Zelda-inspired ocarina-controlled home automation

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/zelda-home-automation/

Allen Pan has wired up his home automation system to be controlled by memorable tunes from the classic Zelda franchise.

Zelda Ocarina Controlled Home Automation – Zelda: Ocarina of Time | Sufficiently Advanced

With Zelda: Breath of the Wild out on the Nintendo Switch, I made a home automation system based off the Zelda series using the ocarina from The Legend of Zelda: Ocarina of Time.

Listen!

Released in 1998, The Legend of Zelda: Ocarina of Time is still an iconic entry in the retro gaming history books.

Very few games have stuck with me in the same way Ocarina has, and I think it’s fair to say that, with the continued success of the Zelda franchise, I’m not the only one who has a special place in their heart for Link, particularly in this musical outing.

Legend of Zelda: Ocarina of Time screenshot

Thanks to Cynosure Gaming‘s Ocarina of Time review for the image.

Allen, or Sufficiently Advanced, as his YouTube subscribers know him, has used a Raspberry Pi to detect and recognise key tunes from the game, with each tune being linked (geddit?) to a specific task. By playing Zelda’s Lullaby (E, G, D, E, G, D), for instance, Allen can lock or unlock the door to his house. Other tunes have different functions: Epona’s Song unlocks the car (for Ocarina noobs, Epona is Link’s horse sidekick throughout most of the game), and Minuet of Forest waters the plants.

So how does it work?

It’s a fairly simple setup based around note recognition. When certain notes are played in a specific sequence, the Raspberry Pi detects the tune via a microphone within the Amazon Echo-inspired body of the build, and triggers the action related to the specific task. The small speaker you can see in the video plays a confirmation tune, again taken from the video game, to show that the task has been completed.
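
The sequence-matching half of that pipeline can be sketched simply: once a pitch-detection stage has turned microphone audio into note names, recognising a tune is just a suffix match against the known melodies. Zelda’s Lullaby below uses the notes given in this post; the pitches listed for Epona’s Song are hypothetical placeholders:

```python
TUNES = {
    "Zelda's Lullaby": ["E", "G", "D", "E", "G", "D"],
    "Epona's Song":    ["D", "B", "A", "D", "B", "A"],  # hypothetical pitches
}

def match_tune(notes):
    """Return the name of a known tune ending the detected-note stream, if any."""
    for name, melody in TUNES.items():
        if notes[-len(melody):] == melody:
            return name
    return None

# A stray note followed by Zelda's Lullaby still matches:
print(match_tune(["C", "E", "G", "D", "E", "G", "D"]))
```

Matching on the end of the stream means the listener can run continuously without needing to know when a tune starts.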

Legend of Zelda Ocarina of Time Raspberry Pi Home Automation system setup image

As for the tasks themselves, Allen has built a small controller for each action, whether it be a piece of wood that presses down on his car key, a servomotor that adjusts the ambient temperature, or a water pump to hydrate his plants. Each controller has its own small ESP8266 wireless connectivity module that links back to the wireless-enabled Raspberry Pi, cutting down on the need for a ton of wires about the home.
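
Because each controller is its own wifi node, the Pi’s side of the system reduces to mapping a recognised tune to a request aimed at the right ESP8266. The tune-to-task pairings below are the ones described in this post, but the hostnames and endpoint paths are assumptions for illustration:

```python
ACTIONS = {
    "Zelda's Lullaby":  ("door",   "toggle-lock"),
    "Epona's Song":     ("car",    "unlock"),
    "Minuet of Forest": ("plants", "water"),
}

def dispatch(tune):
    """Build (and, on the real system, send) the command for a recognised tune."""
    device, action = ACTIONS[tune]
    url = f"http://{device}.local/{action}"  # hypothetical mDNS names for the ESP8266 nodes
    # urllib.request.urlopen(url)            # left commented out: no hardware here
    return url

print(dispatch("Minuet of Forest"))
```

Each ESP8266 then only needs a tiny HTTP server that toggles its one output, which is what keeps the wiring around the house so minimal.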

And yes, before anybody says it, we’re sure that Allen is aware that using tone recognition is not the safest means of locking and unlocking your home. This is just for fun.

Do-it-yourself home automation

While we don’t necessarily expect everyone to brush up on their ocarina skills and build their own Zelda-inspired home automation system, the idea of using something other than voice or text commands to control home appliances is a fun one.

You could use facial recognition at the door to start the kettle boiling, or the detection of certain gases to – ahem! – spray an air freshener.

We love to see what you all get up to with the Raspberry Pi. Have you built your own home automation system controlled by something other than your voice? Share it in the comments below.

 

The post Zelda-inspired ocarina-controlled home automation appeared first on Raspberry Pi.

The CIA’s "Development Tradecraft DOs and DON’Ts"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/the_cias_develo.html

Useful best practices for malware writers, courtesy of the CIA. Seems like a lot of good advice.

General:

  • DO obfuscate or encrypt all strings and configuration data that directly relate to tool functionality. Consideration should be made to also only de-obfuscating strings in-memory at the moment the data is needed. When a previously de-obfuscated value is no longer needed, it should be wiped from memory.

    Rationale: String data and/or configuration data is very useful to analysts and reverse-engineers.

  • DO NOT decrypt or de-obfuscate all string data or configuration data immediately upon execution.

    Rationale: Raises the difficulty for automated dynamic analysis of the binary to find sensitive data.

  • DO explicitly remove sensitive data (encryption keys, raw collection data, shellcode, uploaded modules, etc) from memory as soon as the data is no longer needed in plain-text form. DO NOT RELY ON THE OPERATING SYSTEM TO DO THIS UPON TERMINATION OF EXECUTION.

    Rationale: Raises the difficulty for incident response and forensics review.

  • DO utilize a deployment-time unique key for obfuscation/de-obfuscation of sensitive strings and configuration data.

    Rationale: Raises the difficulty of analysis of multiple deployments of the same tool.

  • DO strip all debug symbol information, manifests (MSVC artifact), build paths, developer usernames from the final build of a binary.

    Rationale: Raises the difficulty for analysis and reverse-engineering, and removes artifacts used for attribution/origination.

  • DO strip all debugging output (e.g. calls to printf(), OutputDebugString(), etc) from the final build of a tool.

    Rationale: Raises the difficulty for analysis and reverse-engineering.

  • DO NOT explicitly import/call functions that are not consistent with a tool’s overt functionality (i.e. WriteProcessMemory, VirtualAlloc, CreateRemoteThread, etc – for a binary that is supposed to be a notepad replacement).

    Rationale: Lowers potential scrutiny of binary and slightly raises the difficulty for static analysis and reverse-engineering.

  • DO NOT export sensitive function names; if exports are required for the binary, utilize an ordinal or a benign function name.

    Rationale: Raises the difficulty for analysis and reverse-engineering.

  • DO NOT generate crashdump files, coredump files, “Blue” screens, Dr Watson or other dialog pop-ups and/or other artifacts in the event of a program crash. DO attempt to force a program crash during unit testing in order to properly verify this.

    Rationale: Avoids suspicion by the end user and system admins, and raises the difficulty for incident response and reverse-engineering.

  • DO NOT perform operations that will cause the target computer to be unresponsive to the user (e.g. CPU spikes, screen flashes, screen “freezing”, etc).

    Rationale: Avoids unwanted attention from the user or system administrator to tool’s existence and behavior.

  • DO make all reasonable efforts to minimize binary file size for all binaries that will be uploaded to a remote target (without the use of packers or compression). Ideal binary file sizes should be under 150KB for a fully featured tool.

    Rationale: Shortens overall “time on air” not only to get the tool on target, but to time to execute functionality and clean-up.

  • DO provide a means to completely “uninstall”/”remove” implants, function hooks, injected threads, dropped files, registry keys, services, forked processes, etc whenever possible. Explicitly document (even if the documentation is “There is no uninstall for this”) the procedures, permissions required and side effects of removal.

    Rationale: Avoids unwanted data left on target. Also, proper documentation allows operators to make better operational risk assessment and fully understand the implications of using a tool or specific feature of a tool.

  • DO NOT leave dates/times such as compile timestamps, linker timestamps, build times, access times, etc. that correlate to general US core working hours (i.e. 8am-6pm Eastern time)

    Rationale: Avoids direct correlation to origination in the United States.

  • DO NOT leave data in a binary file that demonstrates CIA, USG, or its witting partner companies involvement in the creation or use of the binary/tool.

    Rationale: Attribution of binary/tool/etc by an adversary can cause irreversible impacts to past, present and future USG operations and equities.

  • DO NOT have data that contains CIA and USG cover terms, compartments, operation code names or other CIA and USG specific terminology in the binary.

    Rationale: Attribution of binary/tool/etc by an adversary can cause irreversible impacts to past, present and future USG operations and equities.

  • DO NOT have “dirty words” (see dirty word list – TBD) in the binary.

    Rationale: Dirty words, such as hacker terms, may cause unwarranted scrutiny of the binary file in question.

Networking:

  • DO use end-to-end encryption for all network communications. NEVER use networking protocols which break the end-to-end principle with respect to encryption of payloads.

    Rationale: Stifles network traffic analysis and avoids exposing operational/collection data.

  • DO NOT solely rely on SSL/TLS to secure data in transit.

    Rationale: Numerous man-in-middle attack vectors and publicly disclosed flaws in the protocol.

  • DO NOT allow network traffic, such as C2 packets, to be re-playable.

    Rationale: Protects the integrity of operational equities.

  • DO use IETF RFC compliant network protocols as a blending layer. The actual data, which must be encrypted in transit across the network, should be tunneled through a well known and standardized protocol (e.g. HTTPS)

    Rationale: Custom protocols can stand-out to network analysts and IDS filters.

  • DO NOT break compliance of an RFC protocol that is being used as a blending layer. (i.e. Wireshark should not flag the traffic as being broken or mangled)

    Rationale: Broken network protocols can easily stand-out in IDS filters and network analysis.

  • DO use variable size and timing (aka jitter) of beacons/network communications. DO NOT predictably send packets with a fixed size and timing.

    Rationale: Raises the difficulty of network analysis and correlation of network activity.

  • DO proper cleanup of network connections. DO NOT leave around stale network connections.

    Rationale: Raises the difficulty of network analysis and incident response.

Disk I/O:

  • DO explicitly document the “disk forensic footprint” that could be potentially created by various features of a binary/tool on a remote target.

    Rationale: Enables better operational risk assessments with knowledge of potential file system forensic artifacts.

  • DO NOT read, write and/or cache data to disk unnecessarily. Be cognizant of 3rd party code that may implicitly write/cache data to disk.

    Rationale: Lowers potential for forensic artifacts and potential signatures.

  • DO NOT write plain-text collection data to disk.

    Rationale: Raises difficulty of incident response and forensic analysis.

  • DO encrypt all data written to disk.

    Rationale: Disguises intent of file (collection, sensitive code, etc) and raises difficulty of forensic analysis and incident response.

  • DO utilize a secure erase when removing a file from disk that wipes at a minimum the file’s filename, datetime stamps (create, modify and access) and its content. (Note: The definition of “secure erase” varies from filesystem to filesystem, but at least a single pass of zeros of the data should be performed. The emphasis here is on removing all filesystem artifacts that could be useful during forensic analysis)

    Rationale: Raises difficulty of incident response and forensic analysis.

  • DO NOT perform Disk I/O operations that will cause the system to become unresponsive to the user or alerting to a System Administrator.

    Rationale: Avoids unwanted attention from the user or system administrator to tool’s existence and behavior.

  • DO NOT use a “magic header/footer” for encrypted files written to disk. All encrypted files should be completely opaque data files.

    Rationale: Avoids signature of custom file format’s magic values.

  • DO NOT use hard-coded filenames or filepaths when writing files to disk. This must be configurable at deployment time by the operator.

    Rationale: Allows the operator to choose a filename that fits within the operational target environment.

  • DO have a configurable maximum size limit and/or output file count for writing encrypted output files.

    Rationale: Avoids situations where a collection task can get out of control and fill the target’s disk, which would draw unwanted attention to the tool and/or the operation.
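A sketch of what such caps might look like (the class name and knobs are hypothetical): the writer rotates output files, and once both deployment-time limits are exhausted it drops data rather than filling the disk.

```python
class CappedWriter:
    """Rotating output writer with hard caps (illustrative sketch; the
    encryption step from the guidance above is omitted for brevity)."""

    def __init__(self, make_path, max_bytes, max_files):
        self.make_path = make_path      # deployment-time configurable naming
        self.max_bytes = max_bytes      # per-file size cap
        self.max_files = max_files      # total output file count cap
        self.index, self.written, self.f = 0, 0, None

    def write(self, chunk: bytes) -> bool:
        if self.f is None or self.written + len(chunk) > self.max_bytes:
            if self.f:
                self.f.close()
                self.f = None
            if self.index >= self.max_files:
                return False            # caps exhausted: drop, never fill the disk
            self.f = open(self.make_path(self.index), "wb")
            self.index, self.written = self.index + 1, 0
        self.f.write(chunk)
        self.written += len(chunk)
        return True
```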

Dates/Time:

  • DO use GMT/UTC/Zulu as the time zone when comparing date/time.

    Rationale: Provides consistent behavior and helps ensure “triggers/beacons/etc” fire when expected.

  • DO NOT use US-centric timestamp formats such as MM-DD-YYYY. YYYYMMDD is generally preferred.

    Rationale: Maintains consistency across tools, and avoids associations with the United States.
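Both rules amount to one-liners in practice; for example, in Python (the function name is hypothetical):

```python
from datetime import datetime, timezone

def trigger_due(scheduled: datetime) -> bool:
    # Compare in UTC so the trigger fires at the same moment regardless of
    # the target machine's local time zone or DST rules.
    return datetime.now(timezone.utc) >= scheduled

# YYYYMMDD, not MM-DD-YYYY: sorts lexicographically and avoids US-centric formats.
stamp = datetime.now(timezone.utc).strftime("%Y%m%d")
```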

PSP/AV:

  • DO NOT assume a “free” PSP product is the same as a “retail” copy. Test on all SKUs where possible.

    Rationale: While the PSP/AV product may come from the same vendor and appear to have the same features across different SKUs, they are not identical; detection and behavior can differ from SKU to SKU.

  • DO test PSPs with a live (or recently live) internet connection where possible. NOTE: This can be a risk vs gain balance that requires careful consideration and should not be haphazardly done with in-development software. It is well known that PSP/AV products with a live internet connection can and do upload software samples based on varying criteria.

    Rationale: PSP/AV products exhibit significant differences in behavior and detection when connected to the internet versus not.

Encryption: NOD publishes a Cryptography standard: “NOD Cryptographic Requirements v1.1 TOP SECRET.pdf”. Besides the guidance provided here, the requirements in that document should also be met.

The crypto requirements are complex and interesting. I’ll save commenting on them for another post.

News article.

Class Breaks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/class_breaks.html

There’s a concept from computer security known as a class break. It’s a particular security vulnerability that breaks not just one system, but an entire class of systems. Examples might be a vulnerability in a particular operating system that allows an attacker to take remote control of every computer that runs on that system’s software. Or a vulnerability in Internet-enabled digital video recorders and webcams that allow an attacker to recruit those devices into a massive botnet.

It’s a particular way computer systems can fail, exacerbated by the characteristics of computers and software. It only takes one smart person to figure out how to attack the system. Once he does that, he can write software that automates his attack. He can do it over the Internet, so he doesn’t have to be near his victim. He can automate his attack so it works while he sleeps. And then he can pass the ability to someone — or to lots of people — without the skill. This changes the nature of security failures, and completely upends how we need to defend against them.

An example: Picking a mechanical door lock requires both skill and time. Each lock is a new job, and success at one lock doesn’t guarantee success with another of the same design. Electronic door locks, like the ones you now find in hotel rooms, have different vulnerabilities. An attacker can find a flaw in the design that allows him to create a key card that opens every door. If he publishes his attack software, not just the attacker, but anyone can now open every lock. And if those locks are connected to the Internet, attackers could potentially open door locks remotely — they could open every door lock remotely at the same time. That’s a class break.

It’s how computer systems fail, but it’s not how we think about failures. We still think about automobile security in terms of individual car thieves manually stealing cars. We don’t think of hackers remotely taking control of cars over the Internet. Or, remotely disabling every car over the Internet. We think about voting fraud as unauthorized individuals trying to vote. We don’t think about a single person or organization remotely manipulating thousands of Internet-connected voting machines.

In a sense, class breaks are not a new concept in risk management. It’s the difference between home burglaries and fires, which happen occasionally to different houses in a neighborhood over the course of the year, and floods and earthquakes, which either happen to everyone in the neighborhood or no one. Insurance companies can handle both types of risk, but they are inherently different. The increasing computerization of everything is moving us from a burglary/fire risk model to a flood/earthquake model, in which a given threat either affects everyone in town or doesn’t happen at all.

But there’s a key difference between floods/earthquakes and class breaks in computer systems: the former are random natural phenomena, while the latter is human-directed. Floods don’t change their behavior to maximize their damage based on the types of defenses we build. Attackers do that to computer systems. Attackers examine our systems, looking for class breaks. And once one of them finds one, they’ll exploit it again and again until the vulnerability is fixed.

As we move into the world of the Internet of Things, where computers permeate our lives at every level, class breaks will become increasingly important. The combination of automation and action at a distance will give attackers more power and leverage than they have ever had before. Security notions like the precautionary principle — where the potential of harm is so great that we err on the side of not deploying a new technology without proofs of security — will become more important in a world where an attacker can open all of the door locks or hack all of the power plants. It’s not an inherently less secure world, but it’s a differently secure world. It’s a world where driverless cars are much safer than people-driven cars, until suddenly they’re not. We need to build systems that assume the possibility of class breaks — and maintain security despite them.

This essay originally appeared on Edge.org as part of their annual question. This year it was: “What scientific term or concept ought to be more widely known?”

Regulation of the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/regulation_of_t.html

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, were as much a failure of market and policy as of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if, it’s a question of when.

First, the facts. Those websites went down because their domain name provider — a company named Dyn — was forced offline. We don’t know who perpetrated that attack, but it could have easily been a lone hacker. Whoever it was launched a distributed denial-of-service attack against Dyn by exploiting a vulnerability in large numbers — possibly millions — of Internet-of-Things devices like webcams and digital video recorders, then recruiting them all into a single botnet. The botnet bombarded Dyn with traffic, so much that it went down. And when it went down, so did dozens of websites.

Your security on the Internet depends on the security of millions of Internet-enabled devices, designed and sold by companies you’ve never heard of to consumers who don’t care about your security.

The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they’re things like cars, home appliances, thermostats, light bulbs, fitness trackers, medical devices, smart streetlights and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don’t have the security expertise we’ve come to expect from the major computer and smartphone manufacturers, simply because the market won’t stand for the additional costs that would require. These devices don’t get security updates like our more expensive computers, and many don’t even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam — or thermostat, or refrigerator — with nice features at a good price. Even after they were recruited into this botnet, they still work fine — you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.

And, like pollution, the only solution is to regulate. The government could impose minimum security standards on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing companies like Dyn to sue them if their devices are used in DDoS attacks. The details would need to be carefully scoped, but either of these options would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

It’s true that this is a domestic solution to an international problem and that there’s no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change.

Regardless of what you think about regulation vs. market solutions, I believe there is no choice. Governments will get involved in the IoT, because the risks are too great and the stakes are too high. Computers are now able to affect our world in a direct and physical manner.

Security researchers have demonstrated the ability to remotely take control of Internet-enabled cars. They’ve demonstrated ransomware against home thermostats and exposed vulnerabilities in implanted medical devices. They’ve hacked voting machines and power plants. In one recent paper, researchers showed how a vulnerability in smart light bulbs could be used to start a chain reaction, resulting in them all being controlled by the attackers — that’s every one in a city. Security flaws in these things could mean people dying and property being destroyed.

Nothing motivates the U.S. government like fear. Remember 2001? A small-government Republican president created the Department of Homeland Security in the wake of the 9/11 terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for more than a decade. A fatal IoT disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important and complex — and they’re coming. We can’t afford to ignore these issues until it’s too late.

In general, the software market demands that products be fast and cheap and that security be a secondary consideration. That was okay when software didn’t matter — it was okay that your spreadsheet crashed once in a while. But a software bug that literally crashes your car is another thing altogether. The security vulnerabilities in the Internet of Things are deep and pervasive, and they won’t get fixed if the market is left to sort it out for itself. We need to proactively discuss good regulatory solutions; otherwise, a disaster will impose bad ones on us.

This essay previously appeared in the Washington Post.

Pi-powered Wonder Pop Controller

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pi-powered-wonder-pop-controller/

Let me start by saying that I am responsible for the title of this blog post. The build’s creator, the wonderful Nicole He, didn’t correct me on this when I contacted her, sooooo…

Anyway, this is one project that caused the residents of Pi Towers (mainly Liz and me) to stare at it open-mouthed, praying that the build used a Raspberry Pi. This happens sometimes. We see something awesome that definitely uses a Raspberry Pi, Arduino or similar and we cross fingers and toes in the hope it’s the former. This was one of those cases, and a quick Instagram comment brought us the answer we’d hoped for.

Lollipop gif

I’ve shared Nicole’s work on our social channels in the past. A few months back, I came across her Grow Slow project tutorial, which became an instant hit both with our followers and across other social accounts and businesses. Grow Slow uses a Raspberry Pi and webcam to tweet a daily photo of her plant. The tutorial is a great starter for those new to coding and the Pi, another reason why it did so well across social media.

But we’re not here to talk about plants and Twitter. We’re here to talk about the Pi-powered Wonder Pop Controller: a project brought to our attention via a retweet, causing instant drooling.

Lickable Controller

The controller uses a Raspberry Pi, the Adafruit Capacitive Touch HAT, and copper foil tape to create a networked controller.

“I made it for a class about networks; the idea is that we make a physical controller that can connect to a game played over a TCP socket.”

Now, I’m sure someone will argue that it’s not the licking of the lollipop that creates the connection, but rather the licking of the copper tape. And yes, you’re right. But where’s the fun in a project titled ‘Pi-powered Lickable Copper Tape Controller’? Exactly.

lickable lollipop

The idea behind this project is a nice starting block for using capacitive touch in video game controllers. While we figure out our own creations, share with us any interesting controllers you’ve made.

… or make one this weekend and share it on Monday. I can wait.

*Continues to play with the sun on Nicole’s website instead of doing any work*

The post Pi-powered Wonder Pop Controller appeared first on Raspberry Pi.

Detecting landmines – with spinach

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/detecting-landmines-with-spinach/

Forget sniffer dogs…we need to talk about spinach.

The team at MIT (Massachusetts Institute of Technology) have been working to transform spinach plants into a means of detection in the fight against buried munitions such as landmines.

Plant-to-human communication

MIT engineers have transformed spinach plants into sensors that can detect explosives and wirelessly relay that information to a handheld device similar to a smartphone. (Learn more: http://news.mit.edu/2016/nanobionic-spinach-plants-detect-explosives-1031)

Nanoparticles, plus tiny tubes called carbon nanotubes, are embedded into the spinach leaves where they pick up nitro-aromatics, chemicals found in the hidden munitions.

It takes the spinach approximately ten minutes to absorb water from the ground, including the nitro-aromatics, which then bind to the polymer material wrapped around the nanotube.

But where does the Pi come into this?

The MIT team shine a laser onto the leaves, detecting the altered fluorescence of the light emitted by the newly bonded tubes. This light is then read by a Raspberry Pi fitted with an infrared camera, resulting in a precise map of where hidden landmines are located. This signal can currently be picked up within a one-mile radius, with plans to increase the reach in future.

detecting landmines with spinach

You can also physically hack a smartphone to replace the Raspberry Pi… but why would you want to do that?

The team at MIT have already used the tech to detect hydrogen peroxide, TNT, and sarin, while co-author Prof. Michael Strano advises that the same setup can be used to detect “virtually anything”.

“The plants could be used for defence applications, but also to monitor public spaces for terrorism-related activities, since we show both water and airborne detection.”

More information on the paper can be found at the MIT website.

The post Detecting landmines – with spinach appeared first on Raspberry Pi.

WTF Yahoo/FISA search in kernel?

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/wtf-yahoofisa-search-in-kernel.html

A surprising detail in the Yahoo/FISA email search scandal is that they do it with a kernel module. I thought I’d write up some (rambling) notes.

What the government was searching for

As described in the previous blog post, we’ll assume the government is searching for the following string, and possibly other strings like it, within emails:

### Begin ASRAR El Mojahedeen v2.0 Encrypted Message ###

I point this out because it’s a simple search identifying specific strings. It’s not natural language processing. It’s not searching for phrases like “bomb president”.

Also, it’s not AV/spam/child-porn processing. Those look at different things. For example, filtering messages containing child porn involves calculating a SHA2 hash of email attachments and looking up the hashes in a table of known bad content (or even more in-depth analysis). This is quite different from searching.
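That hash-and-lookup step is tiny in code terms. A sketch (the single table entry below is just the SHA-256 of the string "hello", standing in for a real known-bad database):

```python
import hashlib

# Hypothetical known-bad table; in practice this is a large database of hashes.
KNOWN_BAD = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def attachment_is_known_bad(attachment: bytes) -> bool:
    # Hash the attachment and check membership -- no content search involved.
    return hashlib.sha256(attachment).hexdigest() in KNOWN_BAD
```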

The Kernel vs. User Space

Operating systems have two parts, the kernel and user space. The kernel is the operating system proper (e.g. the “Linux kernel”). The software we run is in user space, such as browsers, word processors, games, web servers, databases, GNU utilities [sic], and so on.

The kernel has raw access to the machine, memory, network devices, graphics cards, and so on. User space has virtual access to these things. The user space is the original “virtual machines”, before kernels got so bloated that we needed a third layer to virtualize them too.

This separation between kernel and user has two main benefits. The first is security, controlling which bit of software has access to what. It means, for example, that one user on the machine can’t access another’s files. The second benefit is stability: if one program crashes, the others continue to run unaffected.

Downside of a Kernel Module

Writing a search program as a kernel module (instead of a user space program) defeats the benefits of user space, making the machine less stable and less secure.

Moreover, the sort of thing this module does (parsing emails) has a history of big gaping security flaws. Parsing stuff in the kernel makes cybersecurity experts run away screaming in terror.

On the other hand, people have been doing security stuff (SSL implementations and anti-virus scanning) in the kernel in other situations, so it’s not unprecedented. I mean, it’s still wrong, but it’s been done before.

Upside of a Kernel Module

If doing this as a kernel module (instead of in user space) is so bad, then why does Yahoo do it? It’s probably due to the widely held, but false, belief that putting stuff in the kernel makes it faster.

Everybody knows that kernels are faster, for two reasons. First is that as a program runs, making a system call switches context, from running in user space to running in kernel space. This step is expensive/slow. Kernel modules don’t incur this expense, because code just jumps from one location in the kernel to another. The second performance issue is virtual memory, where reading memory requires an extra step in user space, to translate the virtual memory address to a physical one. Kernel modules access physical memory directly, without this extra step.

But everyone is wrong. Using features like hugepages gets rid of the virtual memory translation cost. There are ways to mitigate the cost of user/kernel transitions, such as moving data in bulk instead of a little bit at a time. Also, CPUs have improved in recent years, dramatically reducing the cost of a kernel/user transition.

The problem we face, though, is inertia. Everyone knows moving modules into the kernel makes things faster. It’s hard getting them to un-learn what they’ve been taught.

Also, following this logic, Yahoo may already have many email handling functions in the kernel. If they’ve already gone down the route of bad design, then they’d have to do this email search as a kernel module as well, to avoid the user/kernel transition cost.

Another possible reason for the kernel-module is that it’s what the programmers knew how to do. That’s especially true if the contractor has experience with other kernel software, such as NSA implants. They might’ve read Phrack magazine on the topic, which might have been their sole education on the subject. [http://phrack.org/issues/61/13.html]

How it was probably done

I don’t know Yahoo’s infrastructure. Presumably they have front-end systems designed to balance the load (and accelerate SSL processing), and back-end systems that do the heavy processing, such as spam and virus checking.

The typical way to do this sort of thing (search) is simply tap into the network traffic, either as a separate computer sniffing (eavesdropping on) the network, or something within the system that taps into the network traffic, such as a netfilter module. Netfilter is the Linux firewall mechanism, and has ways to easily “hook” into specific traffic, either from user space or from a kernel module. There is also a related user space mechanism of hooking network APIs like recv() with a preload shared library.

This traditional mechanism doesn’t work as well anymore. For one thing, incoming email traffic is likely encrypted using SSL (using STARTTLS, for example). For another thing, companies are increasingly encrypting intra-data-center traffic, either with SSL or with hard-coded keys.

Therefore, instead of tapping into network traffic, the code might tap directly into the mail handling software. A good example of this is Sendmail’s milter interface, that allows the easy creation of third-party mail filtering applications, specifically for spam and anti-virus.

But it would be insane to write a milter as a kernel module, since mail handling is done in user space, thus adding unnecessary user/kernel transitions. Consequently, we make the assumption that Yahoo’s intra-data-center traffic is unencrypted, and that for the FISA search thing, they wrote something like a kernel module with netfilter hooks.

How it should’ve been done

Assuming the above guess is correct, that they used kernel netfilter hooks, there are a few alternatives.

They could use user space netfilter hooks instead, but these have a performance impact: they require a transition from the kernel to user space, then a second transition back into the kernel. If the system is designed for high performance, this might be a noticeable performance impact. I doubt it, as it’s still small compared to the rest of the computations involved, but it’s the sort of thing that engineers are prejudiced against, even before they measure the performance impact.

A better way of doing it is hooking the libraries. These days, most software uses shared libraries (.so) to make system calls like recv(). You can write your own shared library, and preload it. When the library function is called, you do your own processing, then call the original function.

Hooking the libraries then lets you tap into the network traffic, but without any additional kernel/user transition.
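The preload trick itself is C territory (an LD_PRELOAD shared library exporting its own recv()), but the wrap-then-delegate pattern is easy to demonstrate in Python: replace the library function with a version that taps the data and then calls the original.

```python
import socket

captured = []                      # traffic copied off to the side
_orig_recv = socket.socket.recv    # keep a reference to the real function

def tapping_recv(self, bufsize, *args):
    data = _orig_recv(self, bufsize, *args)
    captured.append(data)          # tap the traffic...
    return data                    # ...then hand it to the caller unchanged

socket.socket.recv = tapping_recv  # swap the wrapper in
```

The caller sees identical behavior; the tap adds no extra kernel/user transition beyond the one the original call already made.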

Yet another way is to make simple changes in the mail handling software that allow custom hooks to be written.

Third party contractors

We’ve been thinking in terms of technical solutions. There is also the problem of politics.

Almost certainly, the solution was developed by outsiders, by defense contractors like Booz-Allen. (I point them out because of the whole Snowden/Martin thing). This restricts your technical options.

You don’t want to give contractors access to your source code. Nor do you want the contractors making custom changes to your source code, such as adding hooks. Therefore, you are looking at external changes, such as hooking the network stack.

The advantage of a netfilter hook in the kernel is that it has the least additional impact on the system. It can be developed and thoroughly tested by Booz-Allen, then delivered to Yahoo!, who can then install it with little effort.

This is my #1 guess why this was a kernel module – it allowed the most separation between Yahoo! and a defense contractor who wrote it. In other words, there is no technical reason for it — but a political reason.

Let’s talk search

There are two ways to search things: using an NFA (nondeterministic finite automaton) and using a DFA (deterministic finite automaton).

An NFA is the normal way of using regex, or grep. It allows complex patterns to be written, but it requires a potentially large amount of CPU power (i.e. it’s slow). It also requires backtracking within a message, thus meaning the entire email must be reassembled before searching can begin.

The DFA alternative instead creates a large table in memory, then does a single pass over a message to search. Because it does only a single pass, without backtracking, the message can be streamed through the search module, without needing to reassemble the message. In theory, anything searched by an NFA can be searched by a DFA, though in practice some unbounded regex expressions require too much memory, so DFAs usually require simpler patterns.

The DFA approach, by the way, runs at about 4 Gbps per 2.x-GHz Intel x86 server CPU. Because no reassembly is required, it can tap directly into anything above the TCP stack, like netfilter. Or, it can tap below the TCP stack (like libpcap), but that would require some logic to re-order/de-duplicate TCP packets, to present the same ordered stream as TCP.

DFAs would therefore require little or no memory. In contrast, the NFA approach will require more CPU and memory just to reassemble email messages, and the search itself would also be slower.

The naïve approach to searching is to use NFAs. It’s what most people start out with. The smart approach is to use DFAs. You see that in the evolution of the Snort intrusion detection engine, where they started out using complex NFAs and then over the years switched to the faster DFAs.
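A minimal single-pattern version of the DFA idea (sketched here with a KMP failure function; a production engine like Snort uses Aho-Corasick over many patterns at once): precompute a table, then stream bytes through in a single pass, with no backtracking and no message reassembly — the match state simply carries across packet boundaries.

```python
def kmp_table(pattern: bytes):
    # Failure function: precomputed once up front, like compiling a DFA.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

class StreamMatcher:
    """Single-pass, no-backtracking matcher: state carries across chunks,
    so packets can be scanned as they arrive, without reassembling the
    whole message in memory."""

    def __init__(self, pattern: bytes):
        self.pattern, self.fail, self.state = pattern, kmp_table(pattern), 0

    def feed(self, chunk: bytes) -> bool:
        for byte in chunk:
            while self.state and byte != self.pattern[self.state]:
                self.state = self.fail[self.state - 1]
            if byte == self.pattern[self.state]:
                self.state += 1
            if self.state == len(self.pattern):
                self.state = 0        # reset; a real engine would keep scanning
                return True
        return False
```

Note the pattern can be split across two packets and the match is still found, which is exactly what TCP-stream tapping requires.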

You also see it in the network processor market. These are specialized CPUs designed for things like firewalls. They advertise fast regex acceleration, but what they really do is just convert NFAs into something that is mostly a DFA, which you can do on any processor anyway. I have a low opinion of network processors, since what they accelerate are bad decisions. Correctly designed network applications don’t need any special acceleration, except maybe SSL public-key crypto.

So, what the government’s code needs to do is a very lightweight parse of the SMTP protocol in order to extract the from/to email addresses, then a very lightweight search of the message’s content in order to detect if any of the offending strings have been found. When the pattern is found, it then reports the addresses it found.
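Sketched in Python (the names are hypothetical, and real SMTP parsing has more corner cases — pipelining, folding, addresses without angle brackets):

```python
import re

ADDR = re.compile(rb"<([^>]+)>")   # the address inside angle brackets

def scan_smtp_envelope(lines, content_matched):
    """Pull MAIL FROM / RCPT TO out of SMTP command lines, and report the
    addresses only if the content search fired."""
    mail_from, rcpt_to = None, []
    for line in lines:
        upper = line.upper()
        if upper.startswith(b"MAIL FROM:"):
            m = ADDR.search(line)
            if m:
                mail_from = m.group(1)
        elif upper.startswith(b"RCPT TO:"):
            m = ADDR.search(line)
            if m:
                rcpt_to.append(m.group(1))
    return (mail_from, rcpt_to) if content_matched else None
```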

Conclusion

I don’t know Yahoo’s system for processing incoming emails. I don’t know the contents of the court order forcing them to do a search, and what needs to be secret. Therefore, I’m only making guesses here.

But they are educated guesses. Nine times out of ten, in situations similar to Yahoo’s, a “kernel module” would be the most natural solution. It’s how engineers are trained to think, and it would likely be the best fit organizationally. Sure, it really REALLY annoys cybersecurity experts, but nobody cares what we think, so that doesn’t matter.

element14 Pi IoT Smarter Spaces Design Challenge

Post Syndicated from Roger Thornton original https://www.raspberrypi.org/blog/element14-pi-iot-smarter-spaces-design-challenge/

Earlier this year, I was asked to be a judge for the element14 Pi IoT Smarter Spaces Design Challenge. It has been fantastic to be involved in a contest where so many brilliant ideas were developed.

The purpose of the competition was to get designers to use a kit of components that included Raspberry Pi, various accessories, and EnOcean products, to take control of the spaces they are in. Spaces could be at home, at work, outdoors, or any other space the designer could think of.

Graphic showing a figure reflected in a mirror as they select breakfast from a menu displayed on its touchscreen surface

Each entrant provided an initial outline of what they wanted to achieve, after which they were given three months to design, build, and implement their system. All the designers have detailed their work fantastically on the element14 website, and if you’re looking for inspiration for your next project I would recommend you read through the entries to this challenge. It has been excellent to see such a great breadth of projects undertaken, all of which had a unique perspective on what “space” was and how it needed to be controlled.

3rd place

Gerrit Polder developed his Plant Health Camera. Gerrit’s project was fantastic, combining regular and NoIR Raspberry Pi Camera Modules with some very interesting software to monitor plant health in real time.

Pi IoT Plant Health Camera Summary

Element14 Pi IoT challenge Plant Health Camera Summary. For info about this project, visit: https://www.element14.com/community/community/design-challenges/pi-iot/blog/2016/08/29/pi-iot-plant-health-camera-11-summary

2nd place

Robin Eggenkamp created a system called Thuis – that’s Dutch for “at home”, and is pronounced “thoosh”! Robin presented a comprehensive smart home system that connects to a variety of sensors and features in his home, including a keyless door lock and remote lighting control, and incorporates mood lighting and a home cinema system. He also produced a great video of the system in action.

Thuis app demo

Final demo of the Thuis app

1st place

Overall winner Frederick Vandenbosch constructed his Pi IoT Alarm Clock. Frederick produced a truly impressive set of devices which look fantastic and enable a raft of smart home technologies. The devices used in the system range from IP cameras, to energy monitors that can be dotted around the home, to a small bespoke unit that keeps track of house keys. These are controlled from well-designed hubs: an interactive one that includes a display and keypad, as well as the voice-activated alarm clock. The whole system comes together to provide a truly smart space, and I’d recommend reading Frederick’s blog to find out more.

My entry for element14’s PiIoT Design Challenge

This is my demonstration video for element14’s Pi IoT Design Challenge, sponsored by Duratool and EnOcean, in association with Raspberry Pi. Have feedback on this project? Ideas for another? Let me know in the comments!

Thanks to each and every designer in this competition, and to all the people in the element14 community who have helped make this a great competition to be part of. If you’re interested in taking part in a future design challenge run by element14, they are run regularly with some great topics – and the prizes aren’t bad, either.

I urge everyone to keep on designing, building, experimenting, and creating!

Pi IoT Smarter Spaces Design Challenges Winners Announcement


 

The post element14 Pi IoT Smarter Spaces Design Challenge appeared first on Raspberry Pi.

New – Auto Scaling for EC2 Spot Fleets

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-auto-scaling-for-ec2-spot-fleets/

The EC2 Spot Fleet model (see Amazon EC2 Spot Fleet API – Manage Thousands of Spot Instances with one Request for more information) allows you to create a fleet of EC2 instances with a single request. You simply specify the fleet’s target capacity, enter a bid price per hour, and choose the instance types that you would like to have as part of your fleet.

Behind the scenes, AWS will maintain the desired target capacity (expressed in terms of instances or a vCPU count) by launching Spot instances that result in the best prices for you. Over time, as instances in the fleet are terminated due to rising prices, replacement instances will be launched using the specifications that result in the lowest price at that point in time.

New Auto Scaling
Today we are enhancing the Spot Fleet model with the addition of Auto Scaling. You can now arrange to scale your fleet up and down based on an Amazon CloudWatch metric. The metric can originate from an AWS service such as EC2, Amazon EC2 Container Service, or Amazon Simple Queue Service (SQS). Alternatively, your application can publish a custom metric and you can use it to drive the automated scaling. Either way, using these metrics to control the size of your fleet gives you very fine-grained control over application availability, performance, and cost even as conditions and loads change. Here are some ideas to get you started:

  • Containers – Scale container-based applications running on Amazon ECS using CPU or memory usage metrics.
  • Batch Jobs – Scale queue-driven batch jobs based on the number of messages in an SQS queue.
  • Spot Fleets – Scale a fleet based on Spot Fleet metrics such as MaxPercentCapacityAllocation.
  • Web Service – Scale web services based on measured response time and average requests per second.

You can set up Auto Scaling using the Spot Fleet Console, the AWS Command Line Interface (CLI), AWS CloudFormation, or by making API calls using one of the AWS SDKs.
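Under the hood, Spot Fleet scaling is driven by Application Auto Scaling. As a rough sketch (the fleet request ID and capacity numbers below are placeholders, not values from this post), these are the kinds of parameter sets you would hand to boto3’s `application-autoscaling` client via `register_scalable_target` and `put_scaling_policy`:

```python
# Sketch of the parameter dicts behind Spot Fleet Auto Scaling.
# The fleet request ID is hypothetical; in practice you would pass these
# dicts to boto3's "application-autoscaling" client.

FLEET_ID = "spot-fleet-request/sfr-example"  # hypothetical request ID

def scalable_target(min_cap, max_cap):
    """Parameters for register_scalable_target()."""
    return {
        "ServiceNamespace": "ec2",
        "ResourceId": FLEET_ID,
        "ScalableDimension": "ec2:spot-fleet-request:TargetCapacity",
        "MinCapacity": min_cap,
        "MaxCapacity": max_cap,
    }

def step_policy(name, adjustment):
    """Parameters for put_scaling_policy(); a positive adjustment scales up."""
    return {
        "PolicyName": name,
        "ServiceNamespace": "ec2",
        "ResourceId": FLEET_ID,
        "ScalableDimension": "ec2:spot-fleet-request:TargetCapacity",
        "PolicyType": "StepScaling",
        "StepScalingPolicyConfiguration": {
            "AdjustmentType": "ChangeInCapacity",
            "Cooldown": 300,
            "StepAdjustments": [
                {"MetricIntervalLowerBound": 0.0,
                 "ScalingAdjustment": adjustment},
            ],
        },
    }

target = scalable_target(2, 10)
scale_up = step_policy("ScaleUp", 3)
scale_down = step_policy("ScaleDown", -3)
```

The console and CLI build the same requests for you; the dicts above just make the moving parts explicit.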

I started by launching a fleet. I used the request type Request and maintain in order to be able to scale the fleet up and down:

My fleet was up and running within a minute or so:

Then (for illustrative purposes) I created an SQS queue, put some messages in it, and defined a CloudWatch alarm (AppQueueBackingUp) that would fire if there were 10 or more messages visible in the queue:

I also defined an alarm (AppQueueNearlyEmpty) that would fire if the queue was just about empty (2 messages or less).
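For reference, both alarms watch the same SQS metric (`ApproximateNumberOfMessagesVisible`) with different thresholds. A minimal sketch of the `put_metric_alarm` parameters, assuming a hypothetical queue name:

```python
# Sketch of CloudWatch put_metric_alarm() parameters for the two queue alarms.
# "AppQueue" is a hypothetical queue name used for illustration.

def queue_depth_alarm(name, comparison, threshold, queue_name):
    """Alarm on the visible-message count of an SQS queue."""
    return {
        "AlarmName": name,
        "Namespace": "AWS/SQS",
        "MetricName": "ApproximateNumberOfMessagesVisible",
        "Dimensions": [{"Name": "QueueName", "Value": queue_name}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 1,
        "ComparisonOperator": comparison,
        "Threshold": threshold,
    }

backing_up = queue_depth_alarm(
    "AppQueueBackingUp", "GreaterThanOrEqualToThreshold", 10, "AppQueue")
nearly_empty = queue_depth_alarm(
    "AppQueueNearlyEmpty", "LessThanOrEqualToThreshold", 2, "AppQueue")
```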

Finally, I attached the alarms to the ScaleUp and ScaleDown policies for my fleet:

Before I started writing this post, I put 5 messages into the SQS queue. With the fleet launched and the scaling policies in place, I added 5 more, and then waited for the alarm to fire:

Then I checked in on my fleet, and saw that the capacity had been increased as expected. This was visible in the History tab (“New targetCapacity: 5”):

To wrap things up I purged all of the messages from my queue, watered my plants, and returned to find that my fleet had been scaled down as expected (“New targetCapacity: 2”):

Available Now
This new feature is available now and you can start using it today in all regions where Spot instances are supported.


Jeff;

 

EQGRP tools are post-exploitation

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/eqgrp-tools-are-post-exploitation.html

A recent leak exposed hacking tools from the “Equation Group”, a group likely related to the NSA TAO (the NSA/DoD hacking group). I thought I’d write up some comments.

Despite the existence of 0days, these tools seem to be overwhelmingly post-exploitation. They aren’t the sorts of tools you use to break into a network — but the sorts of tools you use afterwards.

The focus of the tools appears to be hacking into network equipment, installing implants, achieving persistence, and using the equipment to sniff network traffic.

Different pentesters have different ways of doing things once they’ve gotten inside a network, and this is reflected in their toolkits. Some focus on Windows and getting domain admin control, and have tools like mimikatz. Others focus on webapps, and how to install hostile PHP scripts. In this case, these tools reflect a methodology that goes after network equipment.

It’s a good strategy. Finding equipment is easy, and undetectable, just do a traceroute. As long as network equipment isn’t causing problems, sysadmins ignore it, so your implants are unlikely to be detected. Internal network equipment is rarely patched, so old exploits are still likely to work. Some tools appear to target bugs in equipment that are likely older than Equation Group itself.

In particular, because network equipment is at the network center instead of the edges, you can reach out and sniff packets through the equipment. Half the time it’s a feature of the network equipment, so no special implant is needed. Conversely, when on the edge of the network, switches often prevent you from sniffing packets, and even if you exploit the switch (e.g. ARP flood), all you get are nearby machines. Getting critical machines from across the network requires remotely hacking network devices.

So you see a group of pentest-type people (TAO hackers) with a consistent methodology, and toolmakers who develop and refine tools for them. Tool development is a rare thing among pentesters — they use tools, they don’t develop them. Having programmers on staff dramatically changes the nature of pentesting.

Consider the program xml2pcap. I don’t know what it does, but it looks like similar tools I’ve written in my own pentests. Various network devices will allow you to sniff packets, but produce output in custom formats. Therefore, you need to write a quick-and-dirty tool that converts from that weird format back into the standard pcap format for use with tools like Wireshark. More than once I’ve had to convert HTML/XML output to pcap. Setting port filters for FTP (21) and Telnet (23) produces low-bandwidth traffic with a high return (admin passwords) within networks — all you need is a script that can convert the packets into standard format to exploit this.
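As an illustration of this kind of quick-and-dirty converter (not the actual xml2pcap, whose workings are unknown), here is a minimal writer that turns hex strings — as a device’s HTML/XML export might present packet bytes — into a standard libpcap file that Wireshark can open:

```python
# Minimal pcap writer: converts hex-string packet dumps into the classic
# libpcap file format (24-byte global header + 16-byte per-record headers).
import struct
import time

PCAP_MAGIC = 0xA1B2C3D4      # standard magic, microsecond timestamps
LINKTYPE_ETHERNET = 1

def pcap_global_header(linktype=LINKTYPE_ETHERNET, snaplen=65535):
    # magic, version 2.4, thiszone, sigfigs, snaplen, network
    return struct.pack("<IHHiIII", PCAP_MAGIC, 2, 4, 0, 0, snaplen, linktype)

def pcap_record(payload, ts=None):
    # per-packet header: ts_sec, ts_usec, captured length, original length
    ts = time.time() if ts is None else ts
    sec = int(ts)
    usec = int((ts - sec) * 1_000_000)
    return struct.pack("<IIII", sec, usec, len(payload), len(payload)) + payload

def hexdump_to_pcap(hex_packets, path):
    """hex_packets: iterable of hex strings, e.g. 'de ad be ef'."""
    with open(path, "wb") as f:
        f.write(pcap_global_header())
        for hexstr in hex_packets:
            f.write(pcap_record(bytes.fromhex(hexstr.replace(" ", ""))))
```

The only fiddly part in practice is scraping the hex out of the device’s markup; once you have the raw bytes, the pcap framing above is all there is to it.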

Also consider the tftpd tool in the dump. Many network devices support that protocol for updating firmware and configuration. That’s pretty much all it’s used for. This points to a defensive security strategy for your organization: log all TFTP traffic.
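A sketch of what that logging could look like at the parsing level — given a UDP payload captured on port 69, summarize the opcode and any requested filename (the field layout follows RFC 1350; the function and its output format are my own illustration):

```python
# Summarize a TFTP payload (UDP port 69) for logging purposes.
import struct

TFTP_OPCODES = {1: "RRQ", 2: "WRQ", 3: "DATA", 4: "ACK", 5: "ERROR"}

def describe_tftp(payload):
    """Return a one-line log summary of a TFTP payload, or None if not TFTP."""
    if len(payload) < 2:
        return None
    (opcode,) = struct.unpack("!H", payload[:2])
    name = TFTP_OPCODES.get(opcode)
    if name is None:
        return None
    if name in ("RRQ", "WRQ"):
        # read/write requests carry "filename\0mode\0" after the opcode
        fields = payload[2:].split(b"\x00")
        filename = fields[0].decode("ascii", "replace")
        return f"{name} {filename}"
    return name
```

A write request for a device’s config would show up as something like `WRQ startup-config` — exactly the event you want in your logs.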

The same applies to SNMP. By the way, SNMP vulnerabilities in network equipment are still low-hanging fruit. SNMP stores thousands of configuration parameters and statistics in a big tree, meaning that it has an enormous attack surface. Any settable, variable-length value (OCTET STRING, OBJECT IDENTIFIER) is something you can play with for buffer overflows and format string bugs. The Cisco 0day in the toolkit was one example.

Some have pointed out that the code in the tools is crappy, and they make obvious crypto errors (such as using the same initialization vectors). This is nonsense. It’s largely pentesters, not software developers, creating these tools. And they have limited threat models — encryption is to avoid easy detection that they are exfiltrating data, not to prevent somebody from looking at the data.

From that perspective, then, this is fine code, with some effort spent at quality for tools that don’t particularly need it. I’m a professional coder, and my little scripts often suck worse than the code I see here.

Lastly, I don’t think it’s a hack of the NSA themselves. Those people are over-the-top paranoid about opsec. But 95% of the US cyber-industrial-complex is made up of companies, who are much more lax about security than the NSA itself. It’s probably one of those companies that got popped — such as an employee who went to DEFCON and accidentally left his notebook computer open on the hotel WiFi.

Conclusion

Despite the 0days, these appear to be post-exploitation tools. They look like the sort of tools pentesters might develop over years, where each time they pop a target, they do a little development based on the devices they find inside that new network in order to compromise more machines/data.