Tag Archives: Africa

AIMS Desktop 2017.1 released

Post Syndicated from corbet original https://lwn.net/Articles/725712/rss

The AIMS desktop is a
Debian-derived distribution aimed at mathematical and scientific use. This
project’s first public release, based on Debian 9, is now available.
It is a GNOME-based distribution with a collection of add-on software.
It is maintained by AIMS (the African Institute for Mathematical
Sciences), a pan-African network of centres of excellence enabling Africa’s
talented students to become innovators driving the continent’s scientific,
educational, and economic self-sufficiency.

AWS Big Data Blog Month in Review: April 2017

Post Syndicated from Derek Young original https://aws.amazon.com/blogs/big-data/aws-big-data-blog-month-in-review-april-2017/

Another month of big data solutions on the Big Data Blog. Please take a look at our summaries below and learn, comment, and share. Thank you for reading!

NEW POSTS

Amazon QuickSight Spring Announcement: KPI Charts, Export to CSV, AD Connector, and More! 
In this blog post, we share a number of new features and enhancements in Amazon QuickSight. You can now create key performance indicator (KPI) charts, define custom ranges when importing Microsoft Excel spreadsheets, export data to comma-separated values (CSV) format, and create aggregate filters for SPICE data sets. In the Enterprise Edition, we added an additional option to connect to your on-premises Active Directory using AD Connector.

Securely Analyze Data from Another AWS Account with EMRFS
Sometimes, data to be analyzed is spread across buckets owned by different accounts. In order to ensure data security, appropriate credentials management needs to be in place. This is especially true for large enterprises storing data in different Amazon S3 buckets for different departments. This post shows how you can use a custom credentials provider to access S3 objects that cannot be accessed by the default credentials provider of EMRFS.

Querying OpenStreetMap with Amazon Athena
This post explains how anyone can use Amazon Athena to quickly query publicly available OSM data stored in Amazon S3 (updated weekly) as an AWS Public Dataset. Imagine that you work for an NGO interested in improving knowledge of and access to health centers in Africa. You might want to know what’s already been mapped, to facilitate the production of maps of surrounding villages, and to determine where infrastructure investments are likely to be most effective.

Build a Real-time Stream Processing Pipeline with Apache Flink on AWS
This post outlines a reference architecture for a consistent, scalable, and reliable stream processing pipeline that is based on Apache Flink using Amazon EMR, Amazon Kinesis, and Amazon Elasticsearch Service. An AWSLabs GitHub repository provides the artifacts that are required to explore the reference architecture in action. Resources include a producer application that ingests sample data into an Amazon Kinesis stream and a Flink program that analyses the data in real time and sends the result to Amazon ES for visualization.

Manage Query Workloads with Query Monitoring Rules in Amazon Redshift
Amazon Redshift is a powerful, fully managed data warehouse that can offer significantly increased performance and lower cost in the cloud. However, queries that hog cluster resources (rogue queries) can affect your experience. In this post, you learn how query monitoring rules can help you spot and act against such queries. This, in turn, helps you run mixed workloads smoothly while maximizing cluster performance and throughput.

Amazon QuickSight Now Supports Audit Logging with AWS CloudTrail
In this post, we announce support for AWS CloudTrail in Amazon QuickSight, which allows logging of QuickSight events across an AWS account. Whether you have an enterprise setting or a small team scenario, this integration will allow QuickSight administrators to accurately answer questions such as who last changed an analysis, or who has connected to sensitive data. With CloudTrail, administrators have better governance, auditing and risk management of their QuickSight usage.

Near Zero Downtime Migration from MySQL to DynamoDB
This post introduces two methods of seamlessly migrating data from MySQL to DynamoDB, minimizing downtime and converting the MySQL key design into one more suitable for NoSQL.


Want to learn more about Big Data or Streaming Data? Check out our Big Data and Streaming data educational pages.

Leave a comment below to let us know what big data topics you’d like to see next on the AWS Big Data Blog.

250,000 Pi Zero W units shipped and more Pi Zero distributors announced

Post Syndicated from Mike Buffham original https://www.raspberrypi.org/blog/pi-zero-distributors-annoucement/

This week, just nine weeks after its launch, we will ship the 250,000th Pi Zero W into the market. As well as hitting that pretty impressive milestone, today we are announcing 13 new Raspberry Pi Zero distributors, so you should find it much easier to get hold of a unit.


This significantly extends the reach we can achieve with Pi Zero and Pi Zero W across the globe. These new distributors serve Australia and New Zealand, Italy, Malaysia, Japan, South Africa, Poland, Greece, Switzerland, Denmark, Sweden, Norway, and Finland. We are also further strengthening our network in the USA, Canada, and Germany, where demand continues to be very high.


A common theme on the Raspberry Pi forums has been the difficulty of obtaining a Zero or Zero W in a number of countries. This has been most notable in the markets which are furthest away from Europe or North America. We are hoping that adding these new distributors will make it much easier for Pi-fans across the world to get hold of their favourite tiny computer.

We know there are still more markets to cover, and we are continuing to work with other potential partners to improve the Pi Zero reach. Watch this space for even further developments!

Who are the new Pi Zero Distributors?

Check the icons below to find the distributor that’s best for you!

Australia and New Zealand

Core Electronics

PiAustralia Raspberry Pi

South Africa

PiShop

Please note: Pi Zero W is not currently available to buy in South Africa, as we are waiting for ICASA Certification.

Denmark, Sweden, Finland, and Norway

JKollerup

electro:kit

Germany and Switzerland

sertronics

pi-shop

Poland

botland

Greece

nettop

Italy

Japan

ksy

switch science

Please note: Pi Zero W is not currently available to buy in Japan as we are waiting for TELEC Certification.

Malaysia

cytron

Please note: Pi Zero W is not currently available to buy in Malaysia, as we are waiting for SIRIM Certification.

Canada and USA

buyapi

Get your Pi Zero

For full product details, plus a complete list of Pi Zero distributors, visit the Pi Zero W page.

Awesome feature image GIF credit goes to Justin Mezzell

The post 250,000 Pi Zero W units shipped and more Pi Zero distributors announced appeared first on Raspberry Pi.

Querying OpenStreetMap with Amazon Athena

Post Syndicated from Seth Fitzsimmons original https://aws.amazon.com/blogs/big-data/querying-openstreetmap-with-amazon-athena/

This is a guest post by Seth Fitzsimmons, member of the 2017 OpenStreetMap US board of directors. Seth works with clients including the Humanitarian OpenStreetMap Team, Mapzen, the American Red Cross, and World Bank to craft innovative geospatial solutions.

OpenStreetMap (OSM) is a free, editable map of the world, created and maintained by volunteers and available for use under an open license. Companies and non-profits like Mapbox, Foursquare, Mapzen, the World Bank, the American Red Cross and others use OSM to provide maps, directions, and geographic context to users around the world.

In the 12 years of OSM’s existence, editors have created and modified several billion features (physical things on the ground like roads or buildings). The main PostgreSQL database that powers the OSM editing interface is now over 2TB and includes historical data going back to 2007. As new users join the open mapping community, more and more valuable data is being added to OpenStreetMap, requiring increasingly powerful tools, interfaces, and approaches to explore its vastness.

This post explains how anyone can use Amazon Athena to quickly query publicly available OSM data stored in Amazon S3 (updated weekly) as an AWS Public Dataset. Imagine that you work for an NGO interested in improving knowledge of and access to health centers in Africa. You might want to know what’s already been mapped, to facilitate the production of maps of surrounding villages, and to determine where infrastructure investments are likely to be most effective.

Note: If you run all the queries in this post, you will be charged approximately $1 based on the number of bytes scanned. All queries used in this post can be found in this GitHub gist.

What is OpenStreetMap?

As an open content project, regular OSM data archives are made available to the public via planet.openstreetmap.org in a few different formats (XML, PBF). This includes both snapshots of the current state of data in OSM as well as historical archives.

Working with “the planet” (as the data archives are referred to) can be unwieldy. Because it contains data spanning the entire world, the size of a single archive is on the order of 50 GB. The format is bespoke and extremely specific to OSM. The data is incredibly rich, interesting, and useful, but the size, format, and tooling can often make it very difficult to even start the process of asking complex questions.

Heavy users of OSM data typically download the raw data and import it into their own systems, tailored for their individual use cases, such as map rendering, driving directions, or general analysis. Now that OSM data is available in the Apache ORC format on Amazon S3, it’s possible to query the data using Athena without even downloading it.

How does Athena help?

You can use Athena along with data made publicly available via OSM on AWS. You don’t have to learn how to install, configure, and populate your own server instances and go through multiple steps to download and transform the data into a queryable form. Thanks to AWS and partners, a regularly updated copy of the planet file (available within hours of OSM’s weekly publishing schedule) is hosted on S3 and made available in a format that lends itself to efficient querying using Athena.

Asking questions with Athena involves registering the OSM planet file as a table and making SQL queries. That’s it. Nothing to download, nothing to configure, nothing to ingest. Athena distributes your queries and returns answers within seconds, even while querying over 9 years and billions of OSM elements.

You’re in control. S3 provides high availability for the data and Athena charges you per TB of data scanned. Plus, we’ve gone through the trouble of keeping scanning charges as small as possible by transcoding OSM’s bespoke format as ORC. All the hard work of transforming the data into something highly queryable and making it publicly available is done; you just need to bring some questions.

Registering Tables

The OSM Public Datasets consist of three tables:

  • planet
    Contains the current versions of all elements present in OSM.
  • planet_history
    Contains a historical record of all versions of all elements (even those that have been deleted).
  • changesets
    Contains information about changesets in which elements were modified (and which have a foreign key relationship to both the planet and planet_history tables).

To register the OSM Public Datasets within your AWS account so you can query them, open the Athena console (make sure you are using the us-east-1 region) to paste and execute the following table definitions:

planet

CREATE EXTERNAL TABLE planet (
  id BIGINT,
  type STRING,
  tags MAP<STRING,STRING>,
  lat DECIMAL(9,7),
  lon DECIMAL(10,7),
  nds ARRAY<STRUCT<ref: BIGINT>>,
  members ARRAY<STRUCT<type: STRING, ref: BIGINT, role: STRING>>,
  changeset BIGINT,
  timestamp TIMESTAMP,
  uid BIGINT,
  user STRING,
  version BIGINT
)
STORED AS ORCFILE
LOCATION 's3://osm-pds/planet/';

planet_history

CREATE EXTERNAL TABLE planet_history (
    id BIGINT,
    type STRING,
    tags MAP<STRING,STRING>,
    lat DECIMAL(9,7),
    lon DECIMAL(10,7),
    nds ARRAY<STRUCT<ref: BIGINT>>,
    members ARRAY<STRUCT<type: STRING, ref: BIGINT, role: STRING>>,
    changeset BIGINT,
    timestamp TIMESTAMP,
    uid BIGINT,
    user STRING,
    version BIGINT,
    visible BOOLEAN
)
STORED AS ORCFILE
LOCATION 's3://osm-pds/planet-history/';

changesets

CREATE EXTERNAL TABLE changesets (
    id BIGINT,
    tags MAP<STRING,STRING>,
    created_at TIMESTAMP,
    open BOOLEAN,
    closed_at TIMESTAMP,
    comments_count BIGINT,
    min_lat DECIMAL(9,7),
    max_lat DECIMAL(9,7),
    min_lon DECIMAL(10,7),
    max_lon DECIMAL(10,7),
    num_changes BIGINT,
    uid BIGINT,
    user STRING
)
STORED AS ORCFILE
LOCATION 's3://osm-pds/changesets/';

 

Under the Hood: Extract, Transform, Load

So, what happens behind the scenes to make this easier for you? In a nutshell, the data is transcoded from the OSM PBF format into Apache ORC.

There’s an AWS Lambda function (running every 15 minutes, triggered by CloudWatch Events) that watches planet.openstreetmap.org for the presence of weekly updates (using rsync). If that function detects that a new version has become available, it submits a set of AWS Batch jobs to mirror, transcode, and place it as the “latest” version. Code for this is available at osm-pds-pipelines GitHub repo.
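For a sense of the moving parts, here is a compressed, hypothetical sketch of such a scheduled function. This is not the actual osm-pds-pipelines code: the bucket, queue, and job definition names are placeholders, and the new-version check is simplified to an event field rather than an rsync comparison.

# Hypothetical sketch of a scheduled "check and kick off AWS Batch" Lambda.
import boto3

batch = boto3.client('batch')
s3 = boto3.client('s3')

STATE_BUCKET = 'example-osm-pipeline-state'   # placeholder bucket
STATE_KEY = 'last-processed-planet-version'


def handler(event, context):
    # In the real pipeline the newest planet version is discovered via rsync
    # against planet.openstreetmap.org; here we assume it arrives in the event.
    latest = event['latest_planet_version']

    try:
        obj = s3.get_object(Bucket=STATE_BUCKET, Key=STATE_KEY)
        last_processed = obj['Body'].read().decode('utf-8').strip()
    except s3.exceptions.NoSuchKey:
        last_processed = None

    if latest == last_processed:
        return 'already processed {}'.format(latest)

    # A new weekly planet file is available: submit a Batch job to mirror it
    # and transcode it from PBF to ORC.
    batch.submit_job(
        jobName='osm-transcode-{}'.format(latest),
        jobQueue='osm-transcode-queue',        # placeholder queue
        jobDefinition='osm-pbf-to-orc',        # placeholder job definition
        parameters={'version': latest},
    )
    s3.put_object(Bucket=STATE_BUCKET, Key=STATE_KEY, Body=latest.encode('utf-8'))
    return 'submitted {}'.format(latest)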

To facilitate the transcoding into a format appropriate for Athena, we have produced an open source tool, OSM2ORC. The tool also includes an Osmosis plugin that can be used with complex filtering pipelines and outputs an ORC file that can be uploaded to S3 for use with Athena, or used locally with other tools from the Hadoop ecosystem.

What types of questions can OpenStreetMap answer?

There are many uses for OpenStreetMap data; here are three major ones and how they may be addressed using Athena.

Case Study 1: Finding Local Health Centers in West Africa

When the American Red Cross mapped more than 7,000 communities in West Africa in areas affected by the Ebola epidemic as part of the Missing Maps effort, they found themselves in a position where collecting a wide variety of data was both important and incredibly beneficial for others. Accurate maps play a critical role in understanding human communities, especially for populations at risk. The lack of detailed maps for West Africa posed a problem during the 2014 Ebola crisis, so collecting and producing data around the world has the potential to improve disaster responses in the future.

As part of the data collection, volunteers collected locations and information about local health centers, something that will facilitate treatment in future crises (and, more importantly, on a day-to-day basis). Combined with information about access to markets and clean drinking water and historical experiences with natural disasters, this data was used to create a vulnerability index to select communities for detailed mapping.

For this example, you find all health centers in West Africa (many of which were mapped as part of Missing Maps efforts). This is something that healthsites.io does for the public (worldwide and editable, based on OSM data), but you’re working with the raw data.

Here’s a query to fetch information about all health centers, tagged as nodes (points), in Guinea, Sierra Leone, and Liberia:

SELECT * from planet
WHERE type = 'node'
  AND tags['amenity'] IN ('hospital', 'clinic', 'doctors')
  AND lon BETWEEN -15.0863 AND -7.3651
  AND lat BETWEEN 4.3531 AND 12.6762;

Buildings, as “ways” (polygons, in this case) assembled from constituent nodes (points), can also be tagged as medical facilities. In order to find those, you need to reassemble geometries. Here you’re taking the average position of all nodes that make up a building (approximately its center point, which is close enough for this purpose). Here is a query that finds both buildings and points that are tagged as medical facilities:

-- select out nodes and relevant columns
WITH nodes AS (
  SELECT
    type,
    id,
    tags,
    lat,
    lon
  FROM planet
  WHERE type = 'node'
),
-- select out ways and relevant columns
ways AS (
  SELECT
    type,
    id,
    tags,
    nds
  FROM planet
  WHERE type = 'way'
    AND tags['amenity'] IN ('hospital', 'clinic', 'doctors')
),
-- filter nodes to only contain those present within a bounding box
nodes_in_bbox AS (
  SELECT *
  FROM nodes
  WHERE lon BETWEEN -15.0863 AND -7.3651
    AND lat BETWEEN 4.3531 AND 12.6762
)
-- find ways intersecting the bounding box
SELECT
  ways.type,
  ways.id,
  ways.tags,
  AVG(nodes.lat) lat,
  AVG(nodes.lon) lon
FROM ways
CROSS JOIN UNNEST(nds) AS t (nd)
JOIN nodes_in_bbox nodes ON nodes.id = nd.ref
GROUP BY (ways.type, ways.id, ways.tags)
UNION ALL
SELECT
  type,
  id,
  tags,
  lat,
  lon
FROM nodes_in_bbox
WHERE tags['amenity'] IN ('hospital', 'clinic', 'doctors');

You could go a step further and query for additional tags included with these (for example, opening_hours) and use that as a metric for measuring “completeness” of the dataset and to focus on additional data to collect (and locations to fill out).
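As a rough sketch of such a completeness check (not from the original post; it assumes the planet table registered above and uses element_at, which returns NULL when a tag is absent):

-- Hypothetical sketch: of the health-center nodes in the same bounding box,
-- how many carry an opening_hours tag?
SELECT
  element_at(tags, 'opening_hours') IS NOT NULL AS has_opening_hours,
  count(*) AS facilities
FROM planet
WHERE type = 'node'
  AND tags['amenity'] IN ('hospital', 'clinic', 'doctors')
  AND lon BETWEEN -15.0863 AND -7.3651
  AND lat BETWEEN 4.3531 AND 12.6762
GROUP BY 1;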

Case Study 2: Generating statistics about mapathons

OSM has a history of holding mapping parties. Mapping parties are events where interested people get together and wander around outside, gathering and improving information about sites (and sights) that they pass. Another form of mapping party is the mapathon, which brings together armchair and desk mappers to focus on improving data in another part of the world.

Mapathons are a popular way of enlisting volunteers for Missing Maps, a collaboration between many NGOs, education institutions, and civil society groups that aims to map the most vulnerable places in the developing world to support international and local NGOs and individuals. One common way that volunteers participate is to trace buildings and roads from aerial imagery, providing baseline data that can later be verified by Missing Maps staff and volunteers working in the areas being mapped.

(Image and data from the American Red Cross)

Data collected during these events lends itself to a couple different types of questions. People like competition, so Missing Maps has developed a set of leaderboards that allow people to see where they stand relative to other mappers and how different groups compare. To facilitate this, hashtags (such as #missingmaps) are included in OSM changeset comments. To do similar ad hoc analysis, you need to query the list of changesets, filter by the presence of certain hashtags in the comments, and group things by username.

Now, find changes made during Missing Maps mapathons at George Mason University (using the #gmu hashtag):

SELECT *
FROM changesets
WHERE regexp_like(tags['comment'], '(?i)#gmu');

This includes all tags associated with a changeset, which typically include a mapper-provided comment about the changes made (often with additional hashtags corresponding to OSM Tasking Manager projects) as well as information about the editor used, imagery referenced, etc.

If you’re interested in the number of individual users who have mapped as part of the Missing Maps project, you can write a query such as:

SELECT COUNT(DISTINCT uid)
FROM changesets
WHERE regexp_like(tags['comment'], '(?i)#missingmaps');

25,610 people (as of this writing)!

Back at GMU, you’d like to know who the most prolific mappers are:

SELECT user, count(*) AS edits
FROM changesets
WHERE regexp_like(tags['comment'], '(?i)#gmu')
GROUP BY user
ORDER BY count(*) DESC;

Nice job, BrokenString!

It’s also interesting to see what types of features were added or changed. You can do that by using a JOIN between the changesets and planet tables:

SELECT planet.*, changesets.tags
FROM planet
JOIN changesets ON planet.changeset = changesets.id
WHERE regexp_like(changesets.tags['comment'], '(?i)#gmu');


Using this as a starting point, you could break down the types of features, highlight popular parts of the world, or do something entirely different.
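For example, a hypothetical sketch of such a breakdown, grouping the joined results by element type and by the presence of a couple of common tag keys, might look like this:

-- Hypothetical sketch: break down #gmu edits by element type and by whether
-- the element carries a building or highway tag.
SELECT
  planet.type,
  element_at(planet.tags, 'building') IS NOT NULL AS is_building,
  element_at(planet.tags, 'highway') IS NOT NULL AS is_highway,
  count(*) AS elements
FROM planet
JOIN changesets ON planet.changeset = changesets.id
WHERE regexp_like(changesets.tags['comment'], '(?i)#gmu')
GROUP BY 1, 2, 3
ORDER BY count(*) DESC;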

Case Study 3: Building Condition

With building outlines having been produced by mappers around (and across) the world, local Missing Maps volunteers (often from local Red Cross / Red Crescent societies) go around with Android phones running OpenDataKit and OpenMapKit to verify that the buildings in question actually exist and to add additional information about them, such as the number of stories, use (residential, commercial, etc.), material, and condition.

This data can be used in many ways: it can provide local geographic context (by being included in map source data) as well as facilitate investment by development agencies such as the World Bank.

Here are a collection of buildings mapped in Dhaka, Bangladesh:

(Map and data © OpenStreetMap contributors)

For NGO staff to determine resource allocation, it can be helpful to enumerate and show buildings in varying conditions. Building conditions within an area can be a means of understanding where to focus future investments.

Querying for buildings is a bit more complicated than working with points or changesets. Of the three core OSM element types (node, way, and relation), only nodes (points) have geographic information associated with them. Ways (lines or polygons) are composed of nodes and inherit vertices from them. This means that ways must be reconstituted in order to effectively query by bounding box.

This results in a fairly complex query. You’ll notice that this is similar to the query used to find buildings tagged as medical facilities above. Here you’re counting buildings in Dhaka according to building condition:

-- select out nodes and relevant columns
WITH nodes AS (
  SELECT
    id,
    tags,
    lat,
    lon
  FROM planet
  WHERE type = 'node'
),
-- select out ways and relevant columns
ways AS (
  SELECT
    id,
    tags,
    nds
  FROM planet
  WHERE type = 'way'
),
-- filter nodes to only contain those present within a bounding box
nodes_in_bbox AS (
  SELECT *
  FROM nodes
  WHERE lon BETWEEN 90.3907 AND 90.4235
    AND lat BETWEEN 23.6948 AND 23.7248
),
-- fetch and expand referenced ways
referenced_ways AS (
  SELECT
    ways.*,
    t.*
  FROM ways
  CROSS JOIN UNNEST(nds) WITH ORDINALITY AS t (nd, idx)
  JOIN nodes_in_bbox nodes ON nodes.id = nd.ref
),
-- fetch *all* referenced nodes (even those outside the queried bounding box)
exploded_ways AS (
  SELECT
    ways.id,
    ways.tags,
    idx,
    nd.ref,
    nodes.id node_id,
    ARRAY[nodes.lat, nodes.lon] coordinates
  FROM referenced_ways ways
  JOIN nodes ON nodes.id = nd.ref
  ORDER BY ways.id, idx
)
-- query ways matching the bounding box
SELECT
  count(*),
  tags['building:condition']
FROM exploded_ways
GROUP BY tags['building:condition']
ORDER BY count(*) DESC;


Most buildings are unsurveyed (125,000 is a lot!), but of those that have been, most are average (as you’d expect). If you were to further group these buildings geographically, you’d have a starting point to determine which areas of Dhaka might benefit the most.
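As one hypothetical way to do that, the final SELECT of the query above could be swapped for something like the following, which reuses the exploded_ways expression and bins buildings into an arbitrary 0.005-degree grid (roughly 500 m):

-- Hypothetical sketch: bin buildings into a coarse grid by the average
-- position of their nodes, then count conditions per grid cell.
SELECT
  round(lat / 0.005) * 0.005 AS lat_bin,
  round(lon / 0.005) * 0.005 AS lon_bin,
  condition,
  count(*) AS buildings
FROM (
  SELECT
    id,
    element_at(tags, 'building:condition') AS condition,
    AVG(coordinates[1]) AS lat,
    AVG(coordinates[2]) AS lon
  FROM exploded_ways
  GROUP BY id, element_at(tags, 'building:condition')
) per_building
GROUP BY 1, 2, 3
ORDER BY buildings DESC;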

Conclusion

OSM data, while incredibly rich and valuable, can be difficult to work with, due to both its size and its data model. In addition to the time spent downloading large files to work with locally, time is spent installing and configuring tools and converting the data into more queryable formats. We think Amazon Athena combined with the ORC version of the planet file, updated on a weekly basis, is an extremely powerful and cost-effective combination. This allows anyone to start querying billions of records with simple SQL in no time, giving you the chance to focus on the analysis, not the infrastructure.

To download the data and experiment with it using other tools, the latest OSM ORC-formatted files are available via OSM on AWS at s3://osm-pds/planet/planet-latest.orc, s3://osm-pds/planet-history/history-latest.orc, and s3://osm-pds/changesets/changesets-latest.orc.

We look forward to hearing what you find out!

A Serverless Authentication System by Jumia

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/a-serverless-authentication-system-by-jumia/

Jumia Group is an ecosystem of nine different companies operating in 22 different countries in Africa. Jumia employs 3000 people and serves 15 million users/month.

Want to secure and centralize millions of user accounts across Africa? Shut down your servers! Jumia Group unified and centralized customer authentication on nine digital services platforms, operating in 22 (and counting) countries in Africa, totaling over 120 customer- and merchant-facing applications. All were unified into a custom Jumia Central Authentication System (JCAS), built in a timely fashion and designed using a serverless architecture.

In this post, we give you our solution overview. For the full technical implementation, see the Jumia Central Authentication System post on the Jumia Tech blog.

The challenge

A group-level initiative was started to centralize authentication for all Jumia users in all countries for all companies. But it was impossible to unify the operational databases for the different companies. Each company had its own user database with its own technological implementation, and even within a single company, logins had not yet been unified across countries. The effects of deduplicating all user accounts were yet to be determined, but were expected to be large. Finally, there was no team responsible for managing this new project, given that a new dedicated infrastructure would be needed.

With these factors in mind, we decided to design this project as a serverless architecture to eliminate the need for infrastructure and a dedicated team. AWS was immediately considered as the best option for its level of
service, intelligent pricing model, and excellent serverless services.

The goal was simple. For all user accounts on all Jumia Group websites:

  • Merge duplicates that might exist in different companies and countries
  • Create a unique login (username and password)
  • Enrich the user profile in this centralized system
  • Share the profile with all Jumia companies

Requirements

We had the following initial requirements while designing this solution on the AWS platform:

  • Secure by design
  • Highly available via multimaster replication
  • Single login for all platforms/countries
  • Minimal performance impact
  • No admin overhead

Chosen technologies

We chose the following AWS services and technologies to implement our solution.

Amazon API Gateway

Amazon API Gateway is a fully managed service, making it really simple to set up an API. It integrates directly with AWS Lambda, which was chosen as our endpoint. It can be easily replicated to other regions, using Swagger import/export.

AWS Lambda

AWS Lambda is the base of serverless computing, allowing you to run code without worrying about infrastructure. All our code runs on Lambda functions using Python; some functions are
called from the API Gateway, others are scheduled like cron jobs.

Amazon DynamoDB

Amazon DynamoDB is a highly scalable NoSQL database with a good API and a clean pricing model. It offers high availability as well as great scalability, and fits the serverless model we aimed for with JCAS.

AWS KMS

AWS KMS was chosen as a key manager to perform envelope encryption. It’s a simple and secure key manager with multiple encryption functionalities.

Envelope encryption

Oversimplifying a little: with envelope encryption, you encrypt the data with a locally generated data key and then encrypt that data key rather than the data itself. The encrypted data key may be stored with the data itself on your persistence layer, because on its own it cannot decrypt the data if compromised. For more information, see How Envelope Encryption Works with Supported AWS Services.

Envelope encryption was chosen because KMS master keys can encrypt or decrypt at most 4 KB of data per request.
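To make the idea concrete, here is a minimal sketch of envelope encryption with KMS and AES-256-CBC (the cipher described in the Security section below). This is not the JCAS code: the key alias is a placeholder and the third-party cryptography package is an assumption.

# Hypothetical sketch of envelope encryption with AWS KMS.
import os
import boto3
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

kms = boto3.client('kms')
KMS_KEY_ID = 'alias/example-jcas-key'  # placeholder key alias


def encrypt_profile(plaintext):
    # 1. Ask KMS for a fresh data key: we receive the key in plaintext (to
    #    encrypt locally) plus an encrypted copy (safe to store with the data).
    data_key = kms.generate_data_key(KeyId=KMS_KEY_ID, KeySpec='AES_256')

    # 2. Encrypt the data locally with AES-256-CBC (PKCS7 padding).
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(data_key['Plaintext']), modes.CBC(iv),
                       backend=default_backend()).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()

    # 3. Persist only the ciphertext, the IV, and the *encrypted* data key;
    #    without a KMS call, the stored key cannot decrypt anything.
    return {'info': iv + ciphertext, 'key': data_key['CiphertextBlob']}


def decrypt_profile(record):
    # Reverse the process: KMS decrypts the data key, then we decrypt the blob.
    plaintext_key = kms.decrypt(CiphertextBlob=record['key'])['Plaintext']
    iv, ciphertext = record['info'][:16], record['info'][16:]
    decryptor = Cipher(algorithms.AES(plaintext_key), modes.CBC(iv),
                       backend=default_backend()).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()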

Amazon SQS

Amazon SQS is an inexpensive queuing service with dynamic scaling, 14-day retention availability, and easy management. It’s the perfect choice for our needs, as we use queuing systems only as a
fallback when saving data to remote regions fails. All the features needed for those fallback cases were covered.

JWT

We also use JSON web tokens for encoding and signing communications between JCAS and company servers. It’s another layer of security for data in transit.

Data structure

Our database design was pretty straightforward. DynamoDB records are accessed by their primary key, and secondary indexes allow additional access patterns. We created a UUID for each user on JCAS, which is used as the primary key in DynamoDB.

Most data must be encrypted, so we use a single field for it. On the other hand, some data needs to be stored in separate fields because the code must be able to access it without decryption, for lookups or basic checks. This indexed data is stored, either hashed or in plain text, outside the main encrypted blob.

We used a field for each searchable data piece:

  • Id (primary key)
  • main phone (indexed)
  • main email (indexed)
  • account status
  • document timestamp

To hold the encrypted data, we use a data dictionary with two main sub-dictionaries: info, with the user’s encrypted data, and keys, with the encrypted data key for each AWS KMS region used to decrypt the info blob.
Passwords are stored in two further dictionaries: old_hashes contains the legacy hashes from the origin systems, and secret holds the user’s JCAS password.

Here’s an example:
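(The original post shows the real layout as an image; the following is only an illustrative sketch, with hypothetical field names and placeholder values shaped after the description above.)

# Hypothetical shape of a user record at rest; every value is a placeholder.
user_record = {
    'id': '0f8b7c5e-1111-2222-3333-444455556666',   # JCAS UUID, primary key
    'main_phone': '<hash of main phone>',           # indexed
    'main_email': '<hash of main email>',           # indexed
    'account_status': 'active',
    'document_timestamp': 1493289600,
    'data': {
        'info': '<AES-256-CBC ciphertext of the user profile>',
        'keys': {
            'us-east-1': '<data key encrypted by KMS in us-east-1>',
            'eu-west-1': '<data key encrypted by KMS in eu-west-1>',
        },
    },
    'old_hashes': {'company-x': '<legacy password hash from origin system>'},
    'secret': '<JCAS password hash>',
}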

Security

Security is like an onion: it needs to be layered. That’s what we did when designing this solution. Our design keeps all of our data unreadable at each layer of the solution, while easing our compliance needs.

A field called data stores all personal information from customers. It’s encrypted using AES-256-CBC with a data key generated and managed by AWS KMS; a new data key is used for each transaction. For communication between the companies and the API, we use API keys, TLS, and a JWT in the request body to ensure that each post is signed and verified.

Data flow

Our second requirement for JCAS was system availability, so we designed data pipelines that allow multi-region replication and avoid collisions by using idempotent operations on all endpoints. The only technology added to the stack was Amazon SQS: on SQS queues, we place all the items we aren’t able to replicate at the time of the client’s request.

JCAS may have inconsistencies between regions, caused by network or region availability; by using SQS, we have workers that synchronize all regions as soon as possible.

Example with two regions

We have three Lambda functions and two SQS queues, where:

1) A Lambda function is triggered by the DynamoDB stream. Upon changes to the user’s table, it tries to write directly to the corresponding DynamoDB table in the second region, falling back to writing to two SQS queues if that fails.


2) A scheduled Lambda function (cron-style) checks an SQS queue for items and tries writing them to the DynamoDB table where the write potentially failed.


3) A cron-style Lambda function checks the SQS queue, calling KMS for any items, and fixes the issue.

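To illustrate step 1, here is a hypothetical sketch of such a replication function (not Jumia’s code): the table, queue, and region names are placeholders, and only one retry queue is shown for brevity, whereas the post describes two.

# Hypothetical sketch: replicate user-table changes from a DynamoDB stream to
# a second region, falling back to SQS when the remote write fails.
import json
import boto3
from boto3.dynamodb.types import TypeDeserializer

REMOTE_REGION = 'eu-west-1'   # placeholder second region
REMOTE_TABLE = boto3.resource('dynamodb', region_name=REMOTE_REGION).Table('jcas_users')
sqs = boto3.client('sqs')
RETRY_QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/111122223333/jcas-replication-retry'

deserializer = TypeDeserializer()


def handler(event, context):
    # Invoked by the DynamoDB stream on the local user table.
    for record in event['Records']:
        if 'NewImage' not in record['dynamodb']:
            continue   # e.g., REMOVE events; deletions are out of scope here
        # Convert the stream's typed attribute format into a plain item.
        item = {k: deserializer.deserialize(v)
                for k, v in record['dynamodb']['NewImage'].items()}
        try:
            # Idempotent put: replaying the same stream record is harmless.
            REMOTE_TABLE.put_item(Item=item)
        except Exception:
            # The second region is unreachable; park the raw image on SQS so
            # the scheduled catch-up function (step 2) can retry it later.
            sqs.send_message(QueueUrl=RETRY_QUEUE_URL,
                             MessageBody=json.dumps(record['dynamodb']['NewImage']))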

The following diagram shows the full infrastructure (for clarity, this diagram leaves out KMS recovery).


Results

Upon going live, we noticed a minor impact on our response times. Note the brown legend in the images below.


This was a cold start; as the infrastructure warmed up, response times started to converge. On 27 April, we were almost back at the initial 500 ms.

Response times then held steady at the values from before JCAS went live (≈500 ms).

As of this writing, our response times keep improving (the dev team changed the method name and we changed the subdomain name).

Customer login used to take ≈500ms and it still takes ≈500ms with JCAS. Those times have improved as other components changed inside our code.

Lessons learned

  • Instead of the standard cross-region replication, creating your own DynamoDB cross-region replication might be a better fit for you, if your data and applications allow it.
  • Take some time to tweak the Lambda runtime memory. Remember that it’s billed per 100ms, so it saves you money if you have it run near a round number.
  • KMS takes away the problem of key management with great security by design. It really simplifies your life.
  • Always check the timestamp before operating on data. If it’s invalid, save the money by skipping further KMS, DynamoDB, and Lambda calls. You’ll love your systems even more.
  • Embrace the serverless paradigm, even if it looks complicated at first. It will save your life further down the road when your traffic bursts or you want to find an engineer who knows your whole system.

Next steps

We are going to leverage everything we’ve done so far to implement SSO in Jumia. For a future project, we are already testing OpenID Connect with DynamoDB as a backend.

Conclusion

We went through a mindset revolution in many ways.
Not only did we go completely serverless, we also started storing critical information in the cloud.
On top of this, we architected a system in which all user data is decoupled between local systems and our central authentication silo.
Managing all of these critical systems became far more predictable and less cumbersome than we thought possible.
For us, this is proof that good, simple designs are the best features to look for when sketching new systems.

If we were to do this again, we would do it in exactly the same way.

Daniel Loureiro (SecOps) & Tiago Caxias (DBA) – Jumia Group

WikiLeaks Releases CIA Hacking Tools

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/wikileaks_relea.html

WikiLeaks just released a cache of 8,761 classified CIA documents from 2012 to 2016, including details of its offensive Internet operations.

I have not read through any of them yet. If you see something interesting, tell us in the comments.

EDITED TO ADD: There’s a lot in here. Many of the hacking tools are redacted, with the tar files and zip archives replaced with messages like:

::: THIS ARCHIVE FILE IS STILL BEING EXAMINED BY WIKILEAKS. :::

::: IT MAY BE RELEASED IN THE NEAR FUTURE. WHAT FOLLOWS IS :::
::: AN AUTOMATICALLY GENERATED LIST OF ITS CONTENTS: :::

Hopefully we’ll get them eventually. The documents say that the CIA — and other intelligence services — can bypass Signal, WhatsApp and Telegram. It seems to be by hacking the end-user devices and grabbing the traffic before and after encryption, not by breaking the encryption.

New York Times article.

EDITED TO ADD: Some details from The Guardian:

According to the documents:

  • CIA hackers targeted smartphones and computers.
  • The Center for Cyber Intelligence is based at the CIA headquarters in Virginia but it has a second covert base in the US consulate in Frankfurt which covers Europe, the Middle East and Africa.
  • A programme called Weeping Angel describes how to attack a Samsung F8000 TV set so that it appears to be off but can still be used for monitoring.

I just noticed this from the WikiLeaks page:

Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized “zero day” exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.

So it sounds like this cache of documents wasn’t taken from the CIA and given to WikiLeaks for publication, but has been passed around the community for a while — and incidentally some part of the cache was passed to WikiLeaks. So there are more documents out there, and others may release them in unredacted form.

Wired article. Slashdot thread. Two articles from the Washington Post.

EDITED TO ADD: This document talks about Comodo version 5.X and version 6.X. Version 6 was released in Feb 2013. Version 7 was released in Apr 2014. This gives us a time window for that page, and for the cache in general. (WikiLeaks says that the documents cover 2013 to 2016.)

If these tools are a few years out of date, it’s similar to the NSA tools released by the “Shadow Brokers.” Most of us thought the Shadow Brokers were the Russians, specifically releasing older NSA tools that had diminished value as secrets. Could this be the Russians as well?

EDITED TO ADD: Nicholas Weaver comments.

EDITED TO ADD (3/8): These documents are interesting:

The CIA’s hand crafted hacking techniques pose a problem for the agency. Each technique it has created forms a “fingerprint” that can be used by forensic investigators to attribute multiple different attacks to the same entity.

This is analogous to finding the same distinctive knife wound on multiple separate murder victims. The unique wounding style creates suspicion that a single murderer is responsible. As soon as one murder in the set is solved, the other murders also find likely attribution.

The CIA’s Remote Devices Branch’s UMBRAGE group collects and maintains a substantial library of attack techniques ‘stolen’ from malware produced in other states including the Russian Federation.

With UMBRAGE and related projects, the CIA can not only increase its total number of attack types but also misdirect attribution by leaving behind the “fingerprints” of the groups that the attack techniques were stolen from.

UMBRAGE components cover keyloggers, password collection, webcam capture, data destruction, persistence, privilege escalation, stealth, anti-virus (PSP) avoidance and survey techniques.

This is being spun in the press as the CIA pretending to be Russia. I’m not convinced that the documents support these allegations. Can someone else look at the documents? I don’t like my conclusion that WikiLeaks is using this document dump as a way to push their own bias.

Putlocker Loses Domain Name Following Court Order

Post Syndicated from Ernesto original https://torrentfreak.com/putlocker-loses-domain-name-following-court-order-170228/

With dozens of millions of monthly views, Putlocker.is is the go-to video streaming site for many people.

Up until last weekend, the site was ranked the 252nd most-visited website on the Internet and it’s particularly popular in the United States, Canada, Australia and South Africa.

As one of the largest ‘pirate sites’ on the Internet, Putlocker is a thorn in the side of rightsholders. It’s also on the radar of the US Government after the Office of the US Trade Representative put it on its annual list of “notorious markets,” but actually cited an incorrect domain.

This week another domain issue cropped up for the site. After losing its Putlocker.is domain name late last year, the site’s recent Putlockers.ch fallback is now gone as well.

Users who try to access the site will see that it no longer loads. A Whois search reveals that the domain has been taken over by the registrar EuroDNS, who’ve pointed it to a 127.0.0.1 blackhole.

Putlockers.ch now owned by EuroDNS

TorrentFreak reached out to EuroDNS Chief Legal Officer Luc Seufer who informed us that they were required to take this drastic measure following an order from the Tribunal d’arrondissement de Luxembourg.

The court rendered a decision in favor of the Belgian Entertainment Association last week, which required the registrar to suspend the domain. To prevent the Putlocker operator from taking it to another registrar, EuroDNS is now listed as the owner.

“The owner modification was the sole means we had at our disposal to comply with the decision which requires that EuroDNS prevent any ‘reactivation’ of this domain name until its expiration date,” Seufer informs us.

“Our customer has been duly notified and provided with a copy of the decision,” he adds.

The Putlocker team has yet to comment on the issue. The site’s official Facebook page hasn’t been updated since the downtime, despite a barrage of questions from users. The most recent message is from last week, referring to an earlier ‘attack.’

At the time the site also warned not to trust various copycats, which ironically are widely promoted elsewhere on the Facebook page now.

To find out more about the nature of the blocking order and other potential targets we contacted the Belgian Entertainment Association. However, at the time of publication, we have yet to receive a reply.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How the Media Influences Our Fear of Terrorism

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/how_the_media_i.html

Good article that crunches the data and shows that the press’s coverage of terrorism is disproportional to its comparative risk.

This isn’t new. I’ve written about it before, and wrote about it more generally when I wrote about the psychology of risk, fear, and security. Basically, the issue is the availability heuristic. We tend to infer the probability of something by how easy it is to bring examples of the thing to mind. So if we can think of a lot of tiger attacks in our community, we infer that the risk is high. If we can’t think of many lion attacks, we infer that the risk is low. But while this is a perfectly reasonable heuristic when living in small family groups in the East African highlands in 100,000 BC, it fails in the face of modern media. The media makes the rare seem more common by spending a lot of time talking about it. It’s not the media’s fault. By definition, news is “something that hardly ever happens.” But when the coverage of terrorist deaths exceeds the coverage of homicides, we have a tendency to mistakenly inflate the risk of the former while discounting the risk of the latter.

Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are. We fear them more than probability indicates we should.

There is a lot of psychological research that tries to explain this, but one of the key findings is this: People tend to base risk analysis more on stories than on data. Stories engage us at a much more visceral level, especially stories that are vivid, exciting or personally involving.

If a friend tells you about getting mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than reading a page of abstract crime statistics will.

Novelty plus dread plus a good story equals overreaction.

It’s not just murders. It’s flying vs. driving: the former is much safer, but the latter is more spectacular when it occurs.

Classifying Elections as "Critical Infrastructure"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/should_election.html

I am a co-author on a paper discussing whether elections should be classified as “critical infrastructure” in the US, based on experiences in other countries:

Abstract: With the Russian government hack of the Democratic National Convention email servers, and further leaks expected over the coming months that could influence an election, the drama of the 2016 U.S. presidential race highlights an important point: Nefarious hackers do not just pose a risk to vulnerable companies, cyber attacks can potentially impact the trajectory of democracies. Yet, to date, a consensus has not been reached as to the desirability and feasibility of reclassifying elections, in particular voting machines, as critical infrastructure due in part to the long history of local and state control of voting procedures. This Article takes on the debate in the U.S. using the 2016 elections as a case study but puts the issue in a global context with in-depth case studies from South Africa, Estonia, Brazil, Germany, and India. Governance best practices are analyzed by reviewing these differing approaches to securing elections, including the extent to which trend lines are converging or diverging. This investigation will, in turn, help inform ongoing minilateral efforts at cybersecurity norm building in the critical infrastructure context, which are considered here for the first time in the literature through the lens of polycentric governance.

The paper was speculative, but now it’s official. The U.S. election has been classified as critical infrastructure. I am tentatively in favor of this, but what really matters is what happens now. What does this mean? What sorts of increased security will election systems get? Will we finally get rid of computerized touch-screen voting?

EDITED TO ADD (1/16): This is a good article.

The Nest: hidden music atop Table Mountain

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/the-nest-hidden-music-atop-table-mountain/

Located at the lookout at the summit of Table Mountain’s Kloof Corner hiking route, The Nest was a beautifully crafted replica of a rock that sat snugly alongside the trail. It would have been easy to pass it without noticing the enhancements: the USB port, headphone socket, and microphone. After all, what would such things be doing by a mountain trail in Cape Town, South Africa?


However, if you were a follower of interesting tech builds or independent music, or a member of the Geocaching community (it’s highly likely that the project was inspired in part by Geocaching), you may have been aware of The Nest as a unique way of sharing the self-titled debut LP from South African band, Bateleur.


Yes, this may seem like something of a publicity stunt. A cheaper version of U2 forcing their album onto every iPhone simply to ‘get through the noise’ and make sure their music was heard. But listen to Bateleur’s LP and I’ll guarantee that there’s no place you’d rather be than sat atop a mountain with the fresh air and beautiful vista before you.

(Image of Kloof Corner, Cape Town, courtesy of trailing ahead)

In my opinion, this build was not so much a publicity stunt as a public service.

Once The Nest was discovered, two whistles would act as a trigger to switch on the Raspberry Pi heart within the semi-translucent faux rock, and a light show, previously hidden from view, would begin to play. A pulsing ring of green lights would indicate when the device was ready for you to insert a USB drive and retrieve the album, while a rainbow pattern would let you know when the download was complete.





You could then either continue on your merry way or take the time to sit back and enjoy the view.

Now you may wonder why I have written this blog post in the past tense, given how recently The Nest was installed. Quite simply put, someone felt the need to vandalise and destroy it. Why? Your guess is as good as ours.

However short-lived The Nest project may have been, I’d like to thank Bateleur for their build. And if you’d like to see the creation of The Nest, here’s a wonderful video. Enjoy.

Bateleur : The Nest

To release their self-titled debut LP, Bateleur created The Nest. All you need to do is plug in. And climb a mountain. For more information visit bateleur.xyz. Credits: all footage, editing & colour grading by Nick Burton-Moore. Hiker: Anine Kirsten. Concept by Bateleur. Music: Bateleur – Over (Again).

The post The Nest: hidden music atop Table Mountain appeared first on Raspberry Pi.

Opening Soon – AWS Office in Dubai to Support Cloud Growth in UAE

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/opening-soon-aws-office-in-dubai-to-support-cloud-growth-in-uae/

The AWS office in Dubai, UAE will open on January 1, 2017.

We’ve been working with the Dubai Investment Development Agency (Dubai FDI) to launch the office, and plan to support startups, government institutions, and some of the Middle East’s historic and most established enterprises as they make the transition to the AWS Cloud.

Resources in Dubai
The office will be staffed with account managers, solutions architects, partner managers, professional services consultants, and support staff to allow customers to interact with AWS in their local setting and language.

In addition to access to the AWS team, customers in the Middle East have access to several important AWS programs including AWS Activate and AWS Educate:

  • AWS Activate is designed to provide startups with resources that will help them to get started on AWS, including up to $100,000 (USD) in AWS promotional credits.
  • AWS Educate is a global initiative designed to provide students and educators with the resources needed to accelerate cloud-based learning endeavors.
  • AWS Training and Certification helps technologists to develop the skills to design, deploy, and operate infrastructure in the AWS Cloud.

We are also planning to host AWSome days and other large-scale training events in the region.

Customers in the Middle East
Middle Eastern organizations were among the earliest adopters of cloud services when AWS launched in 2006. Customers based in the region are using AWS to run everything from development and test environments to Big Data analytics, from mobile, web and social applications to enterprise business applications and mission critical workloads.

AWS counts some of the UAE’s most well-known and fastest growing businesses as customers, including PayFort and Careem, as well as government institutions and some of the largest companies in the Middle East, such as flydubai and Middle East Broadcasting Center.

Careem is the leading ride booking app in the Middle East and North Africa. Launched in 2012, Careem runs totally on AWS and over the past three years has grown by 10x in size every year. This is growth that would not have been possible without AWS. After starting with one city, Dubai, Careem now serves millions of commuters in 43 cities across the Middle East, North Africa and Asia. Careem uses over 500 EC2 instances as well as a number of other services such as Amazon S3, Amazon DynamoDB, Elastic Beanstalk and others.

PayFort is a startup based in the United Arab Emirates that provides payment solutions to customers across the Middle East through its payments gateway, FORT. The platform enables organizations to accept online payments via debit and credit cards. PayFort counts Etihad Airways, Ferrari World, and Souq.com among its customers. PayFort chose to run FORT entirely on AWS technologies and as a result is saving 32% over their on-premises costs. Although cost was key for PayFort, it turns out that they chose AWS due to the high level of security that they could achieve with the platform. Compliance with the Payment Card Industry Data Security Standard (PCI DSS) and International Organization for Standardization (ISO) 27001 is central to PayFort’s payment services, both of which are available with AWS (we were actually the first cloud provider to reach compliance with version 3.1 of PCI DSS).

flydubai is the leading low-cost airline in the Middle East, with over 90 destinations, and was launched by the government of Dubai in 2009. flydubai chose to build its online check-in platform on AWS and went from design to production in four months; the platform is now used by thousands of passengers a day, a timeline that would not have been possible without the cloud. Given the seasonal fluctuations in demand for flights, flydubai also needs an IT provider that allows it to cope with spikes in demand. Using AWS allows it to do this, and lead times for new infrastructure services have been reduced from up to 10 weeks to a matter of hours.

Partners
The AWS Partner Network of consulting and technology partners in the region helps our customers to get the most from the cloud. The network includes global members like CSC as well as prominent regional members such as Redington.

Redington is an AWS Consulting Partner and is the Master Value Added Distributor for AWS in the Middle East and North Africa. They are also an Authorized Commercial Reseller of AWS cloud technologies. Redington is helping organizations in the MEA region with cloud assessment, cloud readiness, design, implementation, migration, deployment, and optimization of cloud resources. They also have an ecosystem of partners, including ISVs with experienced and certified AWS engineers who have cross-domain experience.

Join Us
This announcement is part of our continued expansion across Europe, the Middle East, and Asia. As part of our investment in these areas, we created over 10,000 new jobs in 2015. If you are interested in joining our team in Dubai or in any other location around the world, check out the Amazon Jobs site.

Jeff;

 

Using pgpool and Amazon ElastiCache for Query Caching with Amazon Redshift

Post Syndicated from Felipe Garcia original https://aws.amazon.com/blogs/big-data/using-pgpool-and-amazon-elasticache-for-query-caching-with-amazon-redshift/

Felipe Garcia and Hugo Rozestraten are Solutions Architects for Amazon Web Services

In this blog post, we’ll use a real customer scenario to show you how to create a caching layer in front of Amazon Redshift using pgpool and Amazon ElastiCache.

Almost every application, no matter how simple, uses some kind of database. With SQL queries pervasive, a lack of coordination between end users or applications can sometimes result in redundant executions. This redundancy wastes resources that could be allocated to other tasks.

For example, BI tools and applications consuming data from Amazon Redshift are likely to issue common queries. You can cache some of them to improve the end-user experience and reduce contention in the database. In fact, when you use good data modeling and classification policies, you can even save some money by reducing your cluster size.

What is caching?

In computing, a cache is a hardware or software component that stores data so future requests for that data can be served faster. The data stored in a cache might be the result of an earlier computation or the duplicate of data stored elsewhere. A cache hit occurs when the requested data is found in a cache; a cache miss occurs when it is not. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store. The more requests served from the cache, the faster the system performs.

Customer case: laboratory analysis

In a clinical analysis laboratory, a small team of 6 to 10 scientists (geneticists, doctors, and biologists) query around 2 million lines of genetic code looking for specific genetic modifications. The genes next to a modified gene are also of interest because they can confirm a disease or disorder.

The scientists simultaneously analyze one DNA sample and then hold a meeting to discuss their findings and reach a conclusion.

A Node.js web application contains the logic; it issues the queries against Amazon Redshift. Using the web application connected directly to Amazon Redshift, the team of scientists experienced latencies of around 10 seconds. When the architecture was modified to use pgpool, these scientists were able to run the same queries in less than 1 second (in other words, 10 times faster).


Introducing pgpool

Pgpool is software that sits between your database clients and your database server(s). It acts as a reverse proxy, receiving connections from clients and forwarding them to the database servers. Originally written for PostgreSQL, pgpool has other interesting features besides caching: connection pooling, replication, load balancing, and queueing exceeding connections. Although we didn’t explore these features, we suspect they can be used with Amazon Redshift due to the compatibility between PostgreSQL and Amazon Redshift.

Pgpool can run in an Amazon EC2 instance or in your on-premises environment. For example, you might have a single EC2 instance for dev and test and a fleet of EC2 instances with Elastic Load Balancing and Auto Scaling in production.

The clinical analysis laboratory in our use case used psql (the command line client) and a Node.js application to issue queries against Amazon Redshift through pgpool, and both worked as expected. However, we strongly recommend that you test pgpool with your own PostgreSQL client before making any changes to your architecture.
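If you want to script such a test, a short Python check with psycopg2 is one option. This is a sketch only; install psycopg2 first and replace the placeholder connection values with your own:

import psycopg2

# Point host/port at the pgpool endpoint once it is running (port 5432),
# or directly at Amazon Redshift (port 5439) for a baseline comparison.
conn = psycopg2.connect(host="<endpoint address>", port=5432,
                        dbname="<dbname>", user="<user>", password="<password>")
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())   # (1,) means the client can run queries end to end
cur.close()
conn.close()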

Taking a look at the pgpool caching feature

The pgpool caching feature is disabled by default. It can be configured in two ways:

  • On-memory (shmem)
    • This is the default method if you set up the cache and make no changes. It’s slightly faster than Memcached and is easier to configure and maintain. On the other hand, in high-availability scenarios you tend to waste memory and some database processing, because the cache lives on each pgpool server and each server must process a query at least once before it is cached there. For example, in a pgpool cluster with four servers, if you expect to have a 20 GB cache, you must provision four m3.xlarge instances and pay for that cache four times. Each query must also be processed by the database at least four times, once for each server’s cache.
  • Memcached (memcached)
    • In this method, the cache is maintained externally from the server. The advantage is that the caching storage layer (Memcached) is decoupled from the cache processing layer (pgpool). This means you won’t waste server memory and database processing time, because each query is processed only once and cached externally in Memcached.
    • You can run Memcached anywhere, but we suggest you use Amazon ElastiCache with Memcached. Amazon ElastiCache detects and replaces failed nodes automatically, thereby reducing the overhead associated with self-managed infrastructures. It provides a resilient system that mitigates the risk of overloaded databases, which slows website and application load times.

Caching queries with pgpool

The following flow chart shows how query caching works with pgpool:

[Flow chart: how query caching works with pgpool]

The following diagram shows the minimum architecture required to install and configure pgpool for a dev/test environment:

[Diagram: minimum dev/test architecture for pgpool]

The following diagram shows the recommended minimum architecture for a production environment:

[Diagram: recommended minimum production architecture]

Prerequisites

For the steps in this post, we will use the AWS Command Line Interface (AWS CLI). If you want to use your Mac, Linux, or Microsoft Windows machine to follow along, make sure you have the AWS CLI installed. To learn how, see Installing the AWS Command Line Interface.

Steps for installing and configuring pgpool

1. Setting up the variables:

IMAGEID=ami-c481fad3
KEYNAME=<set your key name here>

The IMAGEID variable is set to use an Amazon Linux AMI from the US East (N. Virginia) region.

Set the KEYNAME variable to the name of the EC2 key pair you will use. This key pair must have been created in the US East (N. Virginia) region.

If you will use a region other than US East (N. Virginia), update IMAGEID and KEYNAME accordingly.
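If you prefer not to hard-code IMAGEID, the boto3 sketch below is one way to look up a current Amazon Linux AMI for your region. The name filter pattern is an assumption based on common Amazon Linux naming conventions; verify the image it returns before launching anything:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[{"Name": "name", "Values": ["amzn-ami-hvm-*-x86_64-gp2"]}],
)["Images"]
# Pick the most recently created image.
latest = max(images, key=lambda img: img["CreationDate"])
print(latest["ImageId"], latest["Name"])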

2. Creating the EC2 instance:

aws ec2 create-security-group --group-name PgPoolSecurityGroup --description "Security group to allow access to pgpool"

MYIP=$(curl eth0.me -s | awk '{print $1"/32"}')

aws ec2 authorize-security-group-ingress --group-name PgPoolSecurityGroup --protocol tcp --port 5432 --cidr $MYIP

aws ec2 authorize-security-group-ingress --group-name PgPoolSecurityGroup --protocol tcp --port 22 --cidr $MYIP

INSTANCEID=$(aws ec2 run-instances \
	--image-id $IMAGEID \
	--security-groups PgPoolSecurityGroup \
	--key-name $KEYNAME \
	--instance-type m3.medium \
	--query 'Instances[0].InstanceId' \
	| sed "s/\"//g")

aws ec2 wait instance-status-ok --instance-ids $INSTANCEID

INSTANCEIP=$(aws ec2 describe-instances \
	--filters "Name=instance-id,Values=$INSTANCEID" \
	--query "Reservations[0].Instances[0].PublicIpAddress" \
	| sed "s/\"//g")

3. Creating the Amazon ElastiCache cluster:

aws ec2 create-security-group --group-name MemcachedSecurityGroup --description "Security group to allow access to Memcached"

aws ec2 authorize-security-group-ingress --group-name MemcachedSecurityGroup --protocol tcp --port 11211 --source-group PgPoolSecurityGroup

MEMCACHEDSECURITYGROUPID=$(aws ec2 describe-security-groups \
	--group-names MemcachedSecurityGroup \
	--query 'SecurityGroups[0].GroupId' | \
	sed "s/\"//g")

aws elasticache create-cache-cluster \
	--cache-cluster-id PgPoolCache \
	--cache-node-type cache.m3.medium \
	--num-cache-nodes 1 \
	--engine memcached \
	--engine-version 1.4.5 \
	--security-group-ids $MEMCACHEDSECURITYGROUPID

aws elasticache wait cache-cluster-available --cache-cluster-id PgPoolCache
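You will need the cluster’s Memcached endpoint later, in step 8. You can read it from the console, or retrieve it with a short boto3 sketch like the one below (assumes boto3 is installed, credentials are configured, and you are working in US East (N. Virginia)):

import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")
cluster = elasticache.describe_cache_clusters(
    CacheClusterId="PgPoolCache",
    ShowCacheNodeInfo=True,
)["CacheClusters"][0]
node = cluster["CacheNodes"][0]["Endpoint"]
print(node["Address"], node["Port"])   # use these for memqcache_memcached_host/port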

4. Accessing the EC2 instance through SSH, and then updating and installing packages:

ssh -i <your pem file goes here> ec2-user@$INSTANCEIP

sudo yum update -y

sudo yum group install "Development Tools" -y

sudo yum install postgresql-devel libmemcached libmemcached-devel -y

5. Downloading the pgpool sourcecode tarball:

curl -L -o pgpool-II-3.5.3.tar.gz http://www.pgpool.net/download.php?f=pgpool-II-3.5.3.tar.gz

6. Extracting and compiling the source:

tar xvzf pgpool-II-3.5.3.tar.gz

cd pgpool-II-3.5.3

./configure --with-memcached=/usr/include/libmemcached-1.0

make

sudo make install

7. Making a copy of the sample conf that comes with pgpool to create our own pgpool.conf:

sudo cp /usr/local/etc/pgpool.conf.sample /usr/local/etc/pgpool.conf

8. Editing pgpool.conf:
Using your editor of choice, open /usr/local/etc/pgpool.conf, and then find and set the following parameters:

  • Set listen_addresses to *.
  • Set port to 5432.
  • Set backend_hostname0 to the endpoint address of your Amazon Redshift cluster.
  • Set backend_port0 to 5439.
  • Set memory_cache_enabled to on.
  • Set memqcache_method to memcached.
  • Set memqcache_memcached_host to your ElastiCache endpoint address.
  • Set memqcache_memcached_port to your ElastiCache endpoint port.
  • Set log_connections to on.
  • Set log_per_node_statement to on.
  • Set pool_passwd to ''.

The modified parameters in your config file should look like this:

listen_addresses = '*'

port = 5432

backend_hostname0 = '<your redshift endpoint goes here>'

backend_port0 = 5439

memory_cache_enabled = on

memqcache_method = 'memcached'

memqcache_memcached_host = '<your memcached endpoint goes here>'

memqcache_memcached_port = 11211

log_connections = on

log_per_node_statement = on
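Before starting pgpool, it can be worth confirming that the instance can actually reach the Memcached endpoint you just configured. The following check uses only the Python standard library; run it on the EC2 instance and replace the placeholder with your ElastiCache endpoint:

import socket

endpoint = "<your memcached endpoint goes here>"
sock = socket.create_connection((endpoint, 11211), timeout=5)
sock.sendall(b"version\r\n")              # memcached's plain-text protocol
print(sock.recv(128).decode().strip())    # expect something like "VERSION 1.4.5"
sock.close()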

9. Setting up permissions:

sudo mkdir /var/run/pgpool

sudo chmod u+rw,g+rw,o+rw /var/run/pgpool

sudo mkdir /var/log/pgpool

sudo chmod u+rw,g+rw,o+rw /var/log/pgpool

10. Starting pgpool:

pgpool -n

The log output confirms that pgpool is now listening on port 5432:

2016-06-21 16:04:15: pid 18689: LOG: Setting up socket for 0.0.0.0:5432
2016-06-21 16:04:15: pid 18689: LOG: Setting up socket for :::5432
2016-06-21 16:04:15: pid 18689: LOG: pgpool-II successfully started. version 3.5.3 (ekieboshi)

11. Testing the setup:
Now that pgpool is running, we will configure our Amazon Redshift client to point to the pgpool endpoint instead of the Amazon Redshift cluster endpoint. To get the endpoint address, you can use the console or the CLI to retrieve the public IP address of the EC2 instance, or you can simply print the value we stored in the $INSTANCEIP variable.

psql -h <pgpool endpoint address> -p 5432 -U <redshift username>

The first time we run the query, we see the following information in the pgpool log:

2016-06-21 17:36:33: pid 18689: LOG: DB node id: 0 backend pid: 25936 statement: select
      s_acctbal,
      s_name,
      p_partkey,
      p_mfgr,
      s_address,
      s_phone,
      s_comment
  from
      part,
      supplier,
      partsupp,
      nation,
      region
  where
      p_partkey = ps_partkey
      and s_suppkey = ps_suppkey
      and p_size = 5
      and p_type like '%TIN'
      and s_nationkey = n_nationkey
      and n_regionkey = r_regionkey
      and r_name = 'AFRICA'
      and ps_supplycost = (
          select 
              min(ps_supplycost)
          from
              partsupp,
              supplier,
              nation,
              region
          where
              p_partkey = ps_partkey
              and s_suppkey = ps_suppkey
              and s_nationkey = n_nationkey
              and n_regionkey = r_regionkey
              and r_name = 'AFRICA'
      )
  order by
      s_acctbal desc,
      n_name,
      s_name,
      p_partkey
  limit 100;

The first line in the log shows that the query ran directly on the Amazon Redshift cluster, so this is a cache miss. Executed against the database, the query took 6814.595 ms to return its results.

If we run this query again, with the same predicates, we see a different result in the logs:

2016-06-21 17:40:19: pid 18689: LOG: fetch from memory cache
2016-06-21 17:40:19: pid 18689: DETAIL: query result fetched from cache. statement: 
select
      s_acctbal,
      s_name,
      p_partkey,
      p_mfgr,
      s_address,
      s_phone,
      s_comment
  from
      part,
      supplier,
      partsupp,
      nation,
      region
  where
      p_partkey = ps_partkey
      and s_suppkey = ps_suppkey
      and p_size = 5
      and p_type like '%TIN'
      and s_nationkey = n_nationkey
      and n_regionkey = r_regionkey
      and r_name = 'AFRICA'
      and ps_supplycost = (
          select 
              min(ps_supplycost)
          from
              partsupp,
              supplier,
              nation,
              region
          where
              p_partkey = ps_partkey
              and s_suppkey = ps_suppkey
              and s_nationkey = n_nationkey
              and n_regionkey = r_regionkey
              and r_name = 'AFRICA'
      )
  order by
      s_acctbal desc,
      n_name,
      s_name,
      p_partkey
  limit 100;

As the first two lines of the log show, the results are now being retrieved from the cache, so this is a cache hit. The difference is huge: the query took only 247.719 ms, roughly 27 times faster than the uncached run.
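You can reproduce this miss-versus-hit comparison with a short script instead of reading the log. The sketch below assumes psycopg2 and uses the same placeholder conventions as the psql command above; substitute your own SELECT statement:

import time
import psycopg2

conn = psycopg2.connect(host="<pgpool endpoint address>", port=5432,
                        dbname="<dbname>", user="<redshift username>",
                        password="<password>")
cur = conn.cursor()
query = "<the SELECT statement you want to compare>"

for label in ("first run (expected miss)", "second run (expected hit)"):
    start = time.time()
    cur.execute(query)
    cur.fetchall()
    print("{}: {:.1f} ms".format(label, (time.time() - start) * 1000))

cur.close()
conn.close()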

Understanding pgpool caching behavior

Pgpool uses your SELECT query as the key for the fetched results.

Caching behavior and invalidation can be configured in a couple ways:

  • Auto invalidation
    • By default, memqcache_auto_cache_invalidation is set to on. When you update a table in Amazon Redshift, the cache in pgpool is invalidated.
  • Expiration
    • memqcache_expire defines, in seconds, how long a result should stay in the cache. The default value is 0, which means infinite.
  • Black list and white list
    • white_memqcache_table_list
      • Comma-separated list of tables that should be cached. Regular expressions are accepted.
    • black_memqcache_table_list
      • Comma-separated list of tables that should not be cached. Regular expressions are accepted.
  • Bypassing Cache
    • /* NO QUERY CACHE */
      • If you specify the comment /* NO QUERY CACHE */ in your query, the query ignores pgpool cache and fetches the result from the database.

If pgpool doesn’t reach the cache due to name resolution or routing issues, for example, it falls back to the database endpoint and doesn’t use any cache.
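As an illustration of the /* NO QUERY CACHE */ comment described in the list above, a psycopg2-style client can force a round trip to Amazon Redshift for a single statement. This is a sketch with placeholder connection details, not a required step:

import psycopg2

conn = psycopg2.connect(host="<pgpool endpoint address>", port=5432,
                        dbname="<dbname>", user="<redshift username>",
                        password="<password>")
cur = conn.cursor()
# The leading comment tells pgpool to skip its cache for this execution only.
cur.execute("/* NO QUERY CACHE */ SELECT 1")
print(cur.fetchone())
conn.close()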

Conclusion

It is easy to implement a caching solution using pgpool with Amazon Redshift and Amazon ElastiCache. This solution can significantly improve the end-user experience and substantially reduce the load on your cluster.

This post shows just one example of how pgpool and this caching architecture can help you. To learn more about the pgpool caching feature, see the pgpool documentation here and here.

Happy querying (and caching, of course). If you have questions or suggestions, please leave a comment below.


Related

Query Routing and Rewrite: Introducing pgbouncer-rr for Amazon Redshift and PostgreSQL


Putlocker.is Mysteriously Goes Down

Post Syndicated from Ernesto original https://torrentfreak.com/putlocker-is-mysteriously-goes-down-161014/

With dozens of millions of monthly views, Putlocker.is is the go-to video streaming site for many people.

The site ranks among the 250 most-visited websites on the Internet and is particularly popular in the United States, Canada, Australia and South Africa.

However, starting three days ago the site suddenly became inaccessible. While a brief downtime stint is nothing unusual for these types of sites, the prolonged outage is a cause for concern among users.

Many are voicing their frustration after being confronted by yet another CloudFlare downtime banner, showing them that the site’s servers are still unresponsive.

“Putlocker is down so I no longer have a reason to live,” one user dramatically announced.

“Putlocker has been down the whole day. I’m going through serious withdrawals,” another added.

[Image: Putlocker.is down (CloudFlare error banner)]

Looking for answers, TorrentFreak tried to contact the Putlocker.is team on their known support address. However, this email returned an error message as well.

As far as we can see the current problems are related to the site’s servers. The domain name itself is operating as it should and hasn’t been seized or suspended by the registrar.

Interestingly, the downtime occurs right after Hollywood’s MPAA reported the site to the United States Trade Representative, describing it as one of the largest piracy threats.

“Putlocker.is is the most visited infringing English language video streaming link site in the world,” the MPAA wrote.

According to the MPAA the site is believed to operate from Vietnam, with its servers being hosted at the Swiss company Private Layer. Whether there’s a direct relation between the report and the downtime is unclear though.

Meanwhile, several “other” Putlockers are seizing the opportunity to gain some traction, at least for the time being. Whether the real Putlocker.is will return as well has yet to be seen.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Building Computer Labs in Western Africa

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/building-computer-labs-in-western-africa/

Back in 2014, Helen covered the story of Dominique Laloux and the first Raspberry Pi computer room in Togo, West Africa.

Having previously worked alongside friends to set up the Kuma Computer Center, Dominique and the team moved on to build another computer room in Kuma Adamé.

Both builds were successful, proving the need for such resources within an area where, prior to 2012, 75% of teachers had never used a computer.

Dominique has since been back in contact via our forum; he informed us of another successful build, again in Togo, converting an old toilet block into a Raspberry Pi computer lab.

[Image: The blank canvas…]

The team had their work cut out, stripping the building of its inner walls, laying down a new concrete floor, and installing windows. 

[Image: Some serious climbing was needed…]

Electricity and LAN were installed next, followed by welded tables and, eventually, the equipment.

[Image: Local teachers and students helped to set up the room]

The room was finally kitted out with 21 Raspberry Pis. This would allow for one computer per student, up to a maximum of 20, as well as one for the teacher’s desk, which would power an LED projector.

The room also houses a laptop with a scanner, and a networked printer.

The project took four weeks to complete, and ended with a two-week training session for 25 teachers. 

[Image: Forget the summer holidays: each teacher showed up every day]

Dominique believes very strongly in the project, and in the positive influence it has had on the area. He writes:

I am now convinced that the model of Raspberry Pi computer labs is an ideal solution to bring ICT to small schools in developing countries, where resources are scarce.

Not only is he continuing to raise funds to build more labs, but he’s also advising other towns that want to build their own. Speaking of the growth of awareness over the past year, he explained, “I was so happy to advise another community 500 km away on how to install their own microcomputer room, based on the same model.”

And his future plans?

My goal is now to raise enough funds to set up one computer room in a school each year for the foreseeable future, hoping that other communities will want to copy the model and build their own at the same time.

We love seeing the progress Dominique and his team have made as they continue to build these important labs for communities in developing countries. Dominique’s hard work and determination are inspiring, and we look forward to watching the students he and his team have helped to nurture continue to learn.


The post Building Computer Labs in Western Africa appeared first on Raspberry Pi.

MPAA Anti-Piracy Cutbacks Lead to “Bullying” Lawsuit

Post Syndicated from Andy original https://torrentfreak.com/mpaa-anti-piracy-cutbacks-lead-to-bullying-lawsuit-160804/

The Australian Federation Against Copyright Theft (AFACT) was viewed by many as the country’s leading anti-piracy outfit. Financed by the major Hollywood studios, AFACT had been front and center of most major copyright battles Down Under since its inception in 2004.

Perhaps most notably, AFACT was the group that spearheaded the prolonged and ultimately unsuccessful legal action that aimed to force local ISP iiNet to disconnect Internet users for alleged piracy.

For several years, AFACT was headed up by Neil Gane, a former Hong Kong Police Inspector who had worked with the MPAA against piracy across Asia. In 2014, when AFACT became known by the more friendly name of the Australian Screen Association (ASA), Gane left the organization to return to Hong Kong.

There Gane headed up the newly created Asia Pacific Internet Centre (APIC), a regional anti-piracy, policy, research and training hub for the Motion Picture Association (MPA) Asia Pacific.

Gane was replaced as head of ASA/AFACT by Mark Day, a former regional legal counsel at the MPA and the group’s main representative in China. Between 2001 and 2009, Day oversaw multiple criminal and civil cases prosecuted by MPA members.

Now, however, Day’s career at the ASA appears to be over. After just a year in his new role, Day was fired from the top job. In response, he’s now suing his former employer and former AFACT chief Neil Gane, alleging that the dismissal was unlawful.

According to court papers filed in Federal Court and first reported by SMH, in 2015 the MPAA made a decision to significantly reduce ASA’s budget.

In response, ASA director Mike Ellis, a veteran of the MPA and its Asia Pacific president, decided to dismiss Day in November 2015 to take over the position himself. Day was on sick leave at the time.

Day later fought back, claiming through his lawyer that he’d been working in a hostile workplace and had been the victim of bullying. He’s now suing the ASA, Mike Ellis, and Neil Gane for discrimination and for punishing him for exercising his workplace rights.

According to SMH, Day is seeking compensation for economic loss, psychological injury, pain, suffering, humiliation, and damage to his professional reputation.

While Day’s lawsuit could yield some interesting facts about the anti-piracy operations of the MPA, the dismissal of the former ASA boss in the face of MPAA cuts is the broader story.

As revealed in May this year, the MPAA is also set to withdraw funding from the UK’s Federation Against Copyright Theft before the end of 2016, ending a 30-year relationship with the group.

Local funding for FACT was withdrawn in favor of financing larger regional hubs with a wider remit, in FACT’s case the MPA’s EMEA (Europe, Middle East, Africa) hub in Brussels.

In ASA’s case, it’s clear that the MPA has decided that its recently-formed Asia Pacific Internet Centre (APIC) will be its regional anti-piracy powerhouse and where its local funding will be concentrated in future.

The MPA’s regional hubs are said to offer the studios “a nimble local presence and a direct relationship with local law enforcement.”

Meanwhile, the MPAA’s head office remains in Los Angeles.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Photocatalysis with a Raspberry Pi

Post Syndicated from Lorna Lynch original https://www.raspberrypi.org/blog/photocatalysis-raspberry-pi/

Access to clean, safe drinking water is a global problem: as water.org notes, 663 million people lack access to water that’s safe to drink. That’s twice the population of the United States, or one person in every ten. Additionally, a recent review of rural water system sustainability in eight countries in Africa, South Asia, and Central America found an average water project failure rate of 20-40 percent. It’s no surprise that the search for a solution to this crisis preoccupies scientists the world over, but what you may not have expected is that, in a lab at Cardiff University, researchers are using Raspberry Pi to help in their efforts to bring safe drinking water to some of the poorest areas of the world.

A tap set into a wall, with sign above reading "SAFE DRINKING WATER"

There are three processes involved in water purification, two of which are reasonably straightforward: filtration can remove particulate matter, while heating water to near 100°C kills bacteria. However, the third process — the removal of highly toxic hydrocarbons, typically from fertiliser and pesticide runoff — is very difficult and, currently, very expensive. The Cardiff group is working on a project to find a cheap, effective method of removing these hydrocarbons from water by means of photocatalysis. Essentially, this means they are finding a way to produce clean water using little more than sunlight, which is really pretty mind-blowing.

Here’s a picture of their experimental setup; you can see the Raspberry Pi in its case on the right-hand side.

A laboratory photocatalysis setup at Cardiff University: on a bench are a beaker of water dosed with methylene blue "pollutant" under UV LED illumination, semi-transparent tubing connecting the contents of the beaker to a flow cell, a Raspberry Pi, and other components.

Raspberry Pi in the lab

A cheap, readily available chemical, titanium dioxide, is spin-coated onto a glass wafer which sits in the bottom of the beaker with a UV LED above it. This wafer coating acts as a semiconductor; when UV photons from the LED strike it, its electrons become mobile, creating locations with positive charge and others with negative charge. As a result, both oxidation reactions and reduction reactions are set off. These reactions break down the hydrocarbons, leaving you with pure water, carbon dioxide, and hydrogen. The solution is pumped through a flow cell (you can see this in the centre of the picture), where an LED light source is shone through the stream and the amount of light passing through is registered by a photodiode. The photodiode turns this output into a voltage, which can be read by the Raspberry Pi with the help of an ADC.
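As a rough illustration of that last step, the sketch below reads a photodiode signal through an MCP3008 ADC using the gpiozero library. The choice of ADC, channel, and 3.3 V reference are assumptions made for the example; the article does not specify the team’s exact hardware.

from time import sleep
from gpiozero import MCP3008

photodiode = MCP3008(channel=0)   # photodiode amplifier wired to ADC channel 0 (assumed)

while True:
    # .value is the fraction of the reference voltage (0.0 to 1.0);
    # multiply by the reference to get an approximate voltage.
    voltage = photodiode.value * 3.3
    print("Transmitted light: {:.3f} V".format(voltage))
    sleep(1)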

The team are currently using two organic dyes, methyl orange and methylene blue, to simulate pollutants for the purposes of the experiment: it is possible to see the reaction take place with the naked eye, as the colour of the dye becomes progressively less saturated. A colourless solution means the “pollutants” have been entirely broken down. You can see both dyes in situ here:

Laboratory photocatalysis setups at Cardiff University: on a bench are a large LCD display with a desktop showing the Raspberry Pi logo, beakers of water dosed with methyl orange and methylene blue "pollutants", semi-transparent tubing connecting the beakers' contents to flow cells, a Raspberry Pi, and other components.

Experimental setup with methyl orange and methylene blue

In previous versions of the setup, it was necessary to use some very large, expensive pieces of equipment to drive the experiment and assess the rate and efficacy of the reaction (two power sources and a voltmeter, each of which cost several hundred pounds); the Raspberry Pi performs the same function for a fraction of the price, enabling multiple experiments to be run in the lab, and offering the possibility of building a neat, cost-effective unit for use in the real world in the future.

Several of the team have very personal reasons for being involved in the project: Eman Alghamdi is from Saudi Arabia, a country which, despite its wealth, struggles to supply water to its people. Her colleague Jess Mabin was inspired by spending time in Africa working with an anti-poverty charity. They hope to produce a device which will be both cheap to manufacture and rugged enough to be used in rural areas throughout the world.

Jess, a research scientist, smiles as she pipettes methylene blue into a beaker that is part of her group's photocatalysis setup.

Jess demonstrates the experiment: methylene blue going in!

As well as thoroughly testing the reaction rate and the lifespan of the wafer coating, the team are hoping to streamline their equipment by building their own version of a HAT to incorporate the ADC, the photodiode, and other components. Ultimately the Pi and its peripherals could form a small, rugged, cost-effective, essentially self-sustaining device which could be used all over the world to help produce clean, safe drinking water. We are really pleased to see the Raspberry Pi being used in this way, and we wish Jess, Eman, and their colleagues every success!

The post Photocatalysis with a Raspberry Pi appeared first on Raspberry Pi.

High IQ Countries Have Less Software Piracy, Research Finds

Post Syndicated from Ernesto original https://torrentfreak.com/high-iq-countries-have-less-software-piracy-research-finds-160619/

There are hundreds of reasons why people may turn to piracy. A financial motive is often mentioned, as is a lack of legal alternatives.

A new study from a group of researchers now suggests that national intelligence can also be added to the list.

The researchers report their findings in a paper titled “Intelligence and Crime: A novel evidence for software piracy,” which offers some intriguing insights.

In a rather straightforward analysis, the research examines the link between national IQ scores and local software piracy rates, as reported by the Business Software Alliance. As can be seen below, there’s a trend indicating that countries with a higher average IQ have lower software piracy rates.

“We find that intelligence has statistically significant negative impact on piracy rates,” the researchers confirm in their paper, drawing a causal conclusion.

[Chart: National IQ and piracy rates]

There are some notable outliers, such as China, where piracy rates and IQ are both relatively high. On the other end of the spectrum we find South Africa, with a low national IQ as well as low piracy rates.

The general trend, however, shows a direct relation between a country’s average IQ and the local software piracy rates.

“After controlling for the potential effect of outlier nations in the sample, software piracy rate declines by about 5.3 percentage points if national IQ increases by 10 points,” the researchers note.

To rule out the possibility that the link is caused by external factors, the researchers carried out robustness tests with various variables including the strength of IP enforcement, political factors, and economic development. However, even after these controls the link remained intact.

Luckily for copyright holders, ‘dumb’ countries are not ‘doomed’ by definition. If the ruling elite is smart enough, they can still lower piracy rates.

“[The results] should not be taken as universal evidence that society with higher intelligent quotient is a requirement to alleviate software piracy,” the researchers write.

“Our findings indicate that if ruling elite enforces policies to decrease software piracy, intelligence provides a credible proxy of the degree of consent of such policies.”

Interestingly, if these results hold up, with a bit of luck software piracy may solve itself in the long run.

Previous research found that software piracy increases literacy in African countries, which may in turn raise the national IQ, which will then lower piracy rates. Or… will that lower literacy again?

The full paper is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Hollywood Withdraws Funding for UK Anti-Piracy Group FACT

Post Syndicated from Andy original https://torrentfreak.com/hollywood-withdraws-funding-for-uk-anti-piracy-group-fact-160524/

The Federation Against Copyright Theft (FACT) is the most aggressive private anti-piracy group currently operating in the UK.

In recent years the organization has been responsible for investigating dozens of alleged pirates and has secured many convictions, largely on behalf of its movie and TV industry partners.

Now, however, FACT faces a somewhat uncertain future after the Motion Picture Association, the movie industry outfit that supplies FACT with half of its funding, decided to pull its support for the anti-piracy group.

The MPA, which represents the interests of Disney, Paramount, Sony, 20th Century Fox, Universal and Warner Bros, has recently advised FACT that it intends to terminate its 30-year long relationship by not renewing its membership when it expires in six months’ time.

Speaking with Screen Daily, MPA Europe president Stan McCoy explained that local funding for FACT had been withdrawn in favor of financing larger regional hubs with a wider remit.

The relevant regional office dealing with the UK is the MPA’s EMEA (Europe, Middle East, Africa) hub in Brussels, which aims to provide “a nimble local presence and a direct relationship with local law enforcement.”

McCoy acknowledged FACT’s efforts over the last three decades but said that the changing nature of piracy, including the shift away from physical to online infringement, requires “a more flexible approach” than the one currently in place.

“We live in a world now where a piracy website can have its nexus in Sweden one day, then move in a few months to Eastern Europe, then to Thailand, or it can operate in all three of those jurisdictions at once,” McCoy said.

For FACT, the withdrawal of the MPA, and by extension the major studios, is a massive blow. The MPA currently provides FACT with around 50% of its funding, leaving the balance to be made up by a range of partners including the UK Cinema Association, the Film Distributors’ Association, the Premier League, and broadcasters including ITV.

FACT confirmed that its MPA funding is being withdrawn and is said to be considering its options. In the meantime, however, it’s unlikely that the UK will become a care-free piracy zone. The MPA says it intends to continue its work protecting copyright in the UK which will include the pursuit of more site-blocking injunctions and increased cooperation with the Police Intellectual Property Crime Unit.

That being said, it will be interesting to see how this situation plays out. FACT provided “boots on the ground” for the studios in the UK and undertook investigations against pirates that, in some cases, the police were reluctant to take on and, in others, to carry through to a prosecution. Abandoning that local touch could be a risky strategy for the MPA, but only time will tell.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Detecting Explosives

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/05/detecting_explo.html

Really interesting article on the difficulties involved with explosive detection at airport security checkpoints.

Abstract: The mid-air bombing of a Somali passenger jet in February was a wake-up call for security agencies and those working in the field of explosive detection. It was also a reminder that terrorist groups from Yemen to Syria to East Africa continue to explore innovative ways to get bombs onto passenger jets by trying to beat detection systems or recruit insiders. The layered state-of-the-art detection systems that are now in place at most airports in the developed world make it very hard for terrorists to sneak bombs onto planes, but the international aviation sector remains vulnerable because many airports in the developing world either have not deployed these technologies or have not provided rigorous training for operators. Technologies and security measures will need to improve to stay one step ahead of innovative terrorists. Given the pattern of recent Islamic State attacks, there is a strong argument for extending state-of-the-art explosive detection systems beyond the aviation sector to locations such as sports arenas and music venues.

I disagree with his conclusions — the last sentence above — but the technical information on explosives detection technology is really interesting.