One of the more useful features of masscan is the “--banners” check, which connects to the TCP port, sends some request, and gets a basic response back. However, since masscan has its own TCP stack, it will interfere with the operating system’s TCP stack if they share the same IPv4 address: the operating system will reply with a RST packet before the TCP connection can be established.
The way to fix this is to use the built-in packet-filtering firewall to block those packets in the operating-system TCP/IP stack. The masscan program still sees everything before the packet-filter, but the operating system can’t see anything after the packet-filter.
Note that we are talking about the “packet-filter” firewall feature here. Remember that macOS, like most operating systems these days, has two separate firewalls: an application firewall and a packet-filter firewall. The application firewall is the one you see in System Settings labeled “Firewall”, and it controls things based upon the application’s identity rather than by which ports it uses. This is normally “on” by default. The packet-filter is normally “off” by default and is of little use to normal users.
Also note that macOS changed packet-filters around version 10.10.5 (“Yosemite”, October 2014). The older one is known as “ipfw”, which was the default firewall for FreeBSD (much of macOS is based on FreeBSD). The replacement is known as PF, which comes from OpenBSD. Whereas you used to use the old “ipfw” command on the command line, you now use the “pfctl” command, as well as the “/etc/pf.conf” configuration file.
What we need to filter is the source port of the packets that masscan will send, so that when replies are received, they won’t reach the operating-system stack but go to masscan instead. To do this, we need to find a range of ports that won’t conflict with the operating system. When the operating system creates outgoing connections, it randomly chooses a source port within a certain range; we want masscan to use source ports in a different range.
To figure out the range macOS uses, we run the following command:
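On macOS the ephemeral range is exposed through two standard sysctl OIDs:

sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last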
On my laptop, which is probably the default for macOS, I get the following range. Sniffing with Wireshark confirms this is the range used for source ports for outgoing connections.
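On a default configuration the values are the standard ephemeral range (matching the numbers discussed below):

net.inet.ip.portrange.first: 49152
net.inet.ip.portrange.last: 65535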
So this means I shouldn’t use source ports anywhere in the range 49152 to 65535. On my laptop, I’ve decided to have masscan use ports 40000 to 41023. The range masscan uses must be a power of two, so here I’m using 1024 (two to the tenth power).
To configure masscan, I can either type the parameter “--source-port 40000-41023” every time I run the program, or I can add the following line to /etc/masscan/masscan.conf. Remember that by default, masscan will look in that configuration file for any configuration parameters, so you don’t have to keep retyping them on the command line.
source-port = 40000-41023
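For reference, a typical invocation that relies on this setting might look like the following; the target range and port are placeholders, and the --source-port flag is redundant if the line above is already in masscan.conf:

masscan 10.0.0.0/8 -p80 --banners --source-port 40000-41023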
Next, I need to add the following firewall rule to the bottom of /etc/pf.conf:
block in proto tcp from any to any port 40000 >< 41024
However, we aren’t done yet. On some versions of macOS the packet-filter firewall is off by default, so every time you reboot your computer, you need to enable it. The simplest way is to run the following on the command line:
pfctl -e
Or, if that doesn’t work, try:
pfctl -E
If the firewall is already running, then you’ll need to load the file explicitly (or reboot):
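The standard way to load (or reload) the ruleset file with pfctl is:

pfctl -f /etc/pf.conf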
Earlier this month, the Pentagon stopped selling phones made by the Chinese companies ZTE and Huawei on military bases because they might be used to spy on their users.
It’s a legitimate fear, and perhaps a prudent action. But it’s just one instance of the much larger issue of securing our supply chains.
All of our computerized systems are deeply international, and we have no choice but to trust the companies and governments that touch those systems. And while we can ban a few specific products, services or companies, no country can isolate itself from potential foreign interference.
In this specific case, the Pentagon is concerned that the Chinese government demanded that ZTE and Huawei add “backdoors” to their phones that could be surreptitiously turned on by government spies or cause them to fail during some future political conflict. This tampering is possible because the software in these phones is incredibly complex. It’s relatively easy for programmers to hide these capabilities, and correspondingly difficult to detect them.
This isn’t the first time the United States has taken action against foreign software suspected to contain hidden features that can be used against us. Last December, President Trump signed into law a bill banning the US government from using software from the Russian company Kaspersky. In 2012, the focus was on Chinese-made Internet routers. Then, the House Intelligence Committee concluded: “Based on available classified and unclassified information, Huawei and ZTE cannot be trusted to be free of foreign state influence and thus pose a security threat to the United States and to our systems.”
Nor is the United States the only country worried about these threats. In 2014, China reportedly banned antivirus products from both Kaspersky and the US company Symantec, based on similar fears. In 2017, the Indian government identified 42 smartphone apps that China subverted. Back in 1997, the Israeli company Check Point was dogged by rumors that its government had added backdoors into its products; other Israeli tech companies have been suspected of the same thing. Even al-Qaeda was concerned; ten years ago, a sympathizer released the encryption software Mujahedeen Secrets, which was claimed to be free of Western influence and backdoors. If a country doesn’t trust another country, then it can’t trust that country’s computer products.
But this trust isn’t limited to the country where the company is based. We have to trust the country where the software is written — and the countries where all the components are manufactured. In 2016, researchers discovered that many different models of cheap Android phones were sending information back to China. The phones might be American-made, but the software was from China. In 2016, researchers demonstrated an even more devious technique, where a backdoor could be added at the computer chip level in the factory that made the chips without the knowledge of, and undetectable by, the engineers who designed the chips in the first place. Pretty much every US technology company manufactures its hardware in countries such as Malaysia, Indonesia, China and Taiwan.
We also have to trust the programmers. Today’s large software programs are written by teams of hundreds of programmers scattered around the globe. Backdoors, put there by we-have-no-idea-who, have been discovered in Juniper firewalls and D-Link routers, both of which are US companies. In 2003, someone almost slipped a very clever backdoor into Linux. Think of how many countries’ citizens are writing software for Apple or Microsoft or Google.
We can go even farther down the rabbit hole. We have to trust the distribution systems for our hardware and software. Documents disclosed by Edward Snowden showed the National Security Agency installing backdoors into Cisco routers being shipped to the Syrian telephone company. There are fake apps in the Google Play store that eavesdrop on you. Russian hackers subverted the update mechanism of a popular brand of Ukrainian accounting software to spread the NotPetya malware.
In 2017, researchers demonstrated that a smartphone can be subverted by installing a malicious replacement screen.
I could go on. Supply-chain security is an incredibly complex problem. US-only design and manufacturing isn’t an option; the tech world is far too internationally interdependent for that. We can’t trust anyone, yet we have no choice but to trust everyone. Our phones, computers, software and cloud systems are touched by citizens of dozens of different countries, any one of whom could subvert them at the demand of their government. And just as Russia is penetrating the US power grid so that it has that capability in the event of hostilities, many countries are almost certainly doing the same thing at the consumer level.
We don’t know whether the risk of Huawei and ZTE equipment is great enough to warrant the ban. We don’t know what classified intelligence the United States has, and what it implies. But we do know that this is just a minor fix for a much larger problem. It’s doubtful that this ban will have any real effect. Members of the military, and everyone else, can still buy the phones. They just can’t buy them on US military bases. And while the US might block the occasional merger or acquisition, or ban the occasional hardware or software product, we’re largely ignoring that larger issue. Solving it borders on somewhere between incredibly expensive and realistically impossible.
Perhaps someday, global norms and international treaties will render this sort of device-level tampering off-limits. But until then, all we can do is hope that this particular arms race doesn’t get too far out of control.
The Directive aims to provide for the coordination of national rules concerning access to the activity of managing copyright and related rights by collective management organisations, the modalities of their governance, and their supervisory framework (recital 8); to lay down the requirements applicable to collective management organisations in order to ensure a high standard of governance, financial management, transparency and reporting (recital 9); and also to ensure the necessary minimum quality of the cross-border services provided by collective management organisations, in particular as regards the transparency of the repertoire represented and the accuracy of the financial flows related to the use of the rights, and to facilitate the provision of multi-territorial, multi-repertoire services (recital 40).
The amendments are substantial and extensive; detailed analyses will probably appear soon.
And a few items from the transitional and final provisions of the amending act: § 27. Within six months of the entry into force of this law, the Minister of Culture shall submit to the European Commission a report on the state and development of multi-territorial online licensing of rights on the territory of the Republic of Bulgaria. The report shall contain information on the availability of multi-territorial licences, on compliance with Chapter Eleven “i” by the collective management organisations, as well as an assessment of the development of multi-territorial licensing of the online use of musical works by users, rightholders, and other interested parties. § 28. The Minister of Culture shall provide the European Commission with a list of the registered collective management organisations and shall notify it of any changes to the list within three months of their occurrence. § 29. The following additions are made to the Electronic Communications Act: 1. In Art. 73, para. 3, item 15 is created: “15. obligations to provide true and accurate information about the number of users and subscribers of the undertakings providing electronic communications networks and/or services.” 2. In Art. 231, para. 2, the following is added at the end: “the contracts with content providers, as well as an up-to-date database of subscribers, in compliance with the requirements for personal data protection”. § 30. In the Culture Protection and Development Act, in Art. 31, para. 1, item 1, after the words “para. 2” the words “and Art. 98v1, para. 6” are added. § 31. The law enters into force on the day of its promulgation in the State Gazette (Държавен вестник), with the exception of §§ 18 and 30, which enter into force nine months after its promulgation.
This post courtesy of George Mao, AWS Senior Serverless Specialist – Solutions Architect
AWS Lambda and AWS CodeDeploy recently made it possible to automatically shift incoming traffic between two function versions based on a preconfigured rollout strategy. This new feature allows you to gradually shift traffic to the new function. If there are any issues with the new code, you can quickly rollback and control the impact to your application.
Previously, you had to manually move 100% of traffic from the old version to the new version. Now, you can have CodeDeploy automatically execute pre- or post-deployment tests and automate a gradual rollout strategy. Traffic shifting is built right into the AWS Serverless Application Model (SAM), making it easy to define and deploy your traffic shifting capabilities. SAM is an extension of AWS CloudFormation that provides a simplified way of defining serverless applications.
In this post, I show you how to use SAM, CloudFormation, and CodeDeploy to accomplish an automated rollout strategy for safe Lambda deployments.
Scenario
For this walkthrough, you write a Lambda application that returns a count of the S3 buckets that you own. You deploy it and use it in production. Later on, you receive requirements that tell you that you need to change your Lambda application to count only buckets that begin with the letter “a”.
Before you make the change, you need to be sure that your new Lambda application works as expected. If it does have issues, you want to minimize the number of impacted users and roll back easily. To accomplish this, you create a deployment process that publishes the new Lambda function, but does not send any traffic to it. You use CodeDeploy to execute a PreTraffic test to ensure that your new function works as expected. After the test succeeds, CodeDeploy automatically shifts traffic gradually to the new version of the Lambda function.
Your Lambda function is exposed as a REST service via an Amazon API Gateway deployment. This makes it easy to test and integrate.
Prerequisites
To execute the SAM and CloudFormation deployment, you must have the following IAM permissions:
cloudformation:*
lambda:*
codedeploy:*
iam:create*
You may use the AWS SAM Local CLI or the AWS CLI to package and deploy your Lambda application. If you choose to use SAM Local, be sure to install it onto your system. For more information, see AWS SAM Local Installation.
For this post, use SAM to define your resources because it comes with built-in CodeDeploy support for safe Lambda deployments. The deployment is handled and automated by CloudFormation.
SAM allows you to define your Serverless applications in a simple and concise fashion, because it automatically creates all necessary resources behind the scenes. For example, if you do not define an execution role for a Lambda function, SAM automatically creates one. SAM also creates the CodeDeploy application necessary to drive the traffic shifting, as well as the IAM service role that CodeDeploy uses to execute all actions.
Create a SAM template
To get started, write your SAM template and call it template.yaml.
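The full template isn’t reproduced here; a minimal sketch of the returnS3Buckets portion, with the handler file name and runtime as assumptions, might look like this:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  returnS3Buckets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: returnS3Buckets.handler      # handler file name assumed
      Runtime: nodejs8.10                   # runtime assumed
      AutoPublishAlias: live                # publish a new version on every deploy and point the "live" alias at it
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: s3:ListAllMyBuckets   # backs the listBuckets call
              Resource: '*'
      DeploymentPreference:
        Type: Linear10PercentEvery1Minute   # shift 10% of traffic per minute
        Hooks:
          PreTraffic: !Ref preTrafficHook   # run the validation function before traffic shifts
      Events:
        getApi:
          Type: Api
          Properties:
            Path: /test
            Method: get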
Review the key parts of the SAM template that defines returnS3Buckets:
The AutoPublishAlias attribute instructs SAM to automatically publish a new version of the Lambda function for each new deployment and link it to the live alias.
The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides the function with permission to call listBuckets.
The DeploymentPreference attribute configures the type of rollout pattern to use. In this case, you are shifting traffic in a linear fashion, moving 10% of traffic every minute to the new version. For more information about supported patterns, see Serverless Application Model: Traffic Shifting Configurations.
The Hooks attribute specifies that you want to execute the preTrafficHook Lambda function before CodeDeploy automatically begins shifting traffic. This function should perform validation testing on the newly deployed Lambda version: it invokes the new version and checks the results. If the tests pass, it instructs CodeDeploy to proceed with the rollout via a call to codedeploy.putLifecycleEventHookExecutionStatus.
The Events attribute defines an API-based event source that can trigger this function. It accepts requests on the /test path using an HTTP GET method.
'use strict';

const AWS = require('aws-sdk');
const codedeploy = new AWS.CodeDeploy({apiVersion: '2014-10-06'});
var lambda = new AWS.Lambda();

exports.handler = (event, context, callback) => {
    console.log("Entering PreTraffic Hook!");

    // Read the DeploymentId & LifecycleEventHookExecutionId from the event payload
    var deploymentId = event.DeploymentId;
    var lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;

    // The ARN of the newly published Lambda version, injected via an environment variable
    var functionToTest = process.env.NewVersion;
    console.log("Testing new function version: " + functionToTest);

    // Perform validation of the newly deployed Lambda version
    var lambdaParams = {
        FunctionName: functionToTest,
        InvocationType: "RequestResponse"
    };

    var lambdaResult = "Failed";
    lambda.invoke(lambdaParams, function(err, data) {
        if (err) { // an error occurred while invoking the new version
            console.log(err, err.stack);
            lambdaResult = "Failed";
        }
        else { // successful response
            var result = JSON.parse(data.Payload);
            console.log("Result: " + JSON.stringify(result));

            // Check the response for valid results
            // The response will be a JSON payload with statusCode and body properties, i.e.:
            // {
            //     "statusCode": 200,
            //     "body": 51
            // }
            if (result.body == 9) {
                lambdaResult = "Succeeded";
                console.log("Validation testing succeeded!");
            }
            else {
                lambdaResult = "Failed";
                console.log("Validation testing failed!");
            }
        }

        // Complete the PreTraffic hook by sending CodeDeploy the validation status,
        // whether the invocation succeeded or not
        var params = {
            deploymentId: deploymentId,
            lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
            status: lambdaResult // status can be 'Succeeded' or 'Failed'
        };

        // Pass AWS CodeDeploy the prepared validation test results
        codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) {
            if (err) {
                // Status update failed
                console.log('CodeDeploy status update failed');
                console.log(err, err.stack);
                callback("CodeDeploy status update failed");
            } else {
                // Status update succeeded
                console.log('CodeDeploy status updated successfully');
                callback(null, 'CodeDeploy status updated successfully');
            }
        });
    });
};
The hook is hardcoded to check that the number of S3 buckets returned is 9.
Review the key parts of the SAM template that defines preTrafficHook (a sketch of this resource follows the attribute descriptions below):
The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides permissions to call the CodeDeploy PutLifecycleEventHookExecutionStatus API action. The second statement provides permissions to invoke the specific version of the returnS3Buckets function to test.
This function has traffic shifting features disabled by setting the DeploymentPreference option to false.
The FunctionName attribute explicitly tells CloudFormation what to name the function. Otherwise, CloudFormation creates the function with the default naming convention: [stackName]-[FunctionName]-[uniqueID]. Name the function with the “CodeDeployHook_” prefix because the CodeDeployServiceRole role only allows InvokeFunction on functions named with that prefix.
Set the Timeout attribute to allow enough time to complete your validation tests.
Use an environment variable to inject the ARN of the newest deployed version of the returnS3Buckets function. The ARN allows the function to know the specific version to invoke and perform validation testing on.
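Putting those attributes together, a sketch of the preTrafficHook resource could look like this; the runtime, timeout value, and wildcard CodeDeploy resource are assumptions (scope the latter down in practice):

  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: CodeDeployHook_preTrafficHook   # prefix required by the CodeDeploy service role
      Handler: preTrafficHook.handler               # handler file name assumed
      Runtime: nodejs8.10                           # runtime assumed
      Timeout: 25                                   # long enough for the validation tests
      DeploymentPreference:
        Enabled: false                              # no traffic shifting for the hook itself
      Environment:
        Variables:
          NewVersion: !Ref returnS3Buckets.Version  # ARN of the newly published version
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: codedeploy:PutLifecycleEventHookExecutionStatus
              Resource: '*'
            - Effect: Allow
              Action: lambda:InvokeFunction
              Resource: !Ref returnS3Buckets.Version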
Deploy the function
Your SAM template is all set and the code is written—you’re ready to deploy the function for the first time. Here’s how to do it via the SAM CLI. To use the AWS CLI instead, replace “sam” with “aws cloudformation”; the package and deploy commands are equivalent.
First, package the function. This command returns a CloudFormation importable file, packaged.yaml.
sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml
Now deploy everything:
sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM
At this point, both Lambda functions have been deployed within the CloudFormation stack mySafeDeployStack. The returnS3Buckets function has been deployed as version 1:
SAM automatically created a few things, including the CodeDeploy application, with the deployment pattern that you specified (Linear10PercentEvery1Minute). There is currently one deployment group, with no action, because no deployments have occurred. SAM also created the IAM service role that this CodeDeploy application uses:
There is a single managed policy attached to this role, which allows CodeDeploy to invoke any Lambda function that begins with “CodeDeployHook_”.
An API has been set up called safeDeployStack. It targets your Lambda function with the /test resource using the GET method. When you test the endpoint, API Gateway executes the returnS3Buckets function and it returns the number of S3 buckets that you own. In this case, it’s 51.
Publish a new Lambda function version
Now implement the requirements change, which is to make returnS3Buckets count only buckets that begin with the letter “a”. The code now looks like the following (see returnS3BucketsNew.js in GitHub):
'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
    console.log("I am here! " + context.functionName + ":" + context.functionVersion);

    s3.listBuckets(function (err, data) {
        if (err) {
            console.log(err, err.stack);
            callback(null, {
                statusCode: 500,
                body: "Failed!"
            });
        }
        else {
            var allBuckets = data.Buckets;

            console.log("Total buckets: " + allBuckets.length);
            //callback(null, allBuckets.length);

            // New code begins here
            var counter = 0;
            for (var i in allBuckets) {
                if (allBuckets[i].Name[0] === "a")
                    counter++;
            }
            console.log("Total buckets starting with a: " + counter);

            callback(null, {
                statusCode: 200,
                body: counter
            });
        }
    });
};
Repackage and redeploy with the same two commands as earlier:
sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml
sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM
CloudFormation understands that this is a stack update instead of an entirely new stack. You can see that reflected in the CloudFormation console:
During the update, CloudFormation deploys the new Lambda function as version 2 and adds it to the “live” alias, but no traffic is routed to the new version yet. CodeDeploy now takes over to begin the safe deployment process.
The first thing CodeDeploy does is invoke the preTrafficHook function. Verify that this happened by reviewing the Lambda logs and metrics:
The function should progress successfully, invoke Version 2 of returnS3Buckets, and finally invoke the CodeDeploy API with a success code. After this occurs, CodeDeploy begins the predefined rollout strategy. Open the CodeDeploy console to review the deployment progress (Linear10PercentEvery1Minute):
Verify the traffic shift
During the deployment, verify that the traffic shift has started by running the test periodically. As the deployment shifts towards the new version, a larger percentage of the responses return 9 instead of 51, the bucket counts returned by the new and old versions, respectively.
A minute later, you see 10% more traffic shifting to the new version. The whole process takes 10 minutes to complete. After completion, open the Lambda console and verify that the “live” alias now points to version 2:
After 10 minutes, the deployment is complete and CodeDeploy signals success to CloudFormation and completes the stack update.
Check the results
If you invoke the function alias manually, you see the results of the new implementation.
aws lambda invoke --function-name [lambda arn to live alias] out.txt
You can also execute the prod stage of your API and verify the results by issuing an HTTP GET to the invoke URL:
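For example, with a placeholder invoke URL (the API ID, region, and stage name below are assumptions), repeating a simple GET during the rollout shows the mix of old and new responses:

while true; do curl -s https://abc123.execute-api.us-east-1.amazonaws.com/prod/test; echo; sleep 5; done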
Summary
This post has shown you how you can safely automate your Lambda deployments using the Lambda traffic shifting feature. You used the Serverless Application Model (SAM) to define your Lambda functions and configured CodeDeploy to manage your deployment patterns. Finally, you used CloudFormation to automate the deployment and updates to your function and PreTraffic hook.
Now that you know all about this new feature, you’re ready to begin automating Lambda deployments with confidence that things will work as designed. I look forward to hearing about what you’ve built with the AWS Serverless Platform.
The rhashtable data structure is a generic resizable hash-table implementation in the Linux kernel, which LWN first introduced as “relativistic hash tables” back in 2014. I thought at the time that it might be fun to make use of rhashtables, but didn’t, until an opportunity arose through my work on the Lustre filesystem. Lustre is a cluster filesystem that is currently in drivers/staging while the code is revised to meet upstream requirements. One of those requirements is to avoid duplicating similar functionality where possible. As Lustre contains a resizable hash table, it really needs to be converted to use rhashtables instead — at last I have my opportunity.
Subscribers can read on for a look at the rhashtable API by guest author Neil Brown.
Yesterday the political alliance Democratic Bulgaria was announced, bringing together Yes, Bulgaria, DSB, and the Greens. And since I am part of the team presented as an alternative to the current government, I will allow myself to write my first deliberately party-focused blog post since last year’s elections.
I agreed to join the team behind this governing alternative because I don’t like to shirk. Yes, I could come up with plenty of excuses not to take part, and public exposure carries risks, but everyone has excuses. Digital transformation, and e-government as part of it, is something I believe I can help with. It is also something that is urgently needed if we don’t want to fall behind as a country in the long run.
But I won’t focus on my own role, and I won’t launch into grand speeches about the bright future, the warm relations within the new alliance, our spectacular results at the next elections, and so on. Instead, I will summarize the comments people have made (across social networks and news sites) and try to offer another perspective.
Quite a few people feel a palpable negativity towards this alliance. Each has their own reason, and I cannot argue with that. But this negativity culminates in a set of sweeping claims about the alliance and the parties in it, and those claims I can try to dispute. The goal is not to say “aha! you’re wrong not to like us” (that is subjective, and everyone has the right to dislike whatever they want), but rather to round out the picture everyone has of the political landscape. I will go through ten claims/comments that come up most often, not to slip into “explanatory mode” but to enter into a kind of dialogue with the more skeptical.
You are only uniting to clear the 4% threshold
One consequence of such an alliance will indeed be entering parliament, but that is not its goal. The goal is for people with democratic views for Bulgaria (and they are actually not few) to have someone to vote for without wondering whether they are helping BSP, DPS, the Kremlin, or anyone else. The goal is long-term. And given that a concrete governing team has been announced, the goal is government: adequate, modern, and expert, not parochial.
There is another aspect. Ever since Yes, Bulgaria was founded we have had a “glass ceiling” that boils down to “well, we like you, but will you get in?” (I have heard it at several meetings with potential voters). So we hope this alliance will break that glass ceiling, i.e. that those who would support us will actually do so. The sociologists will tell.
It’s too late for that now / why only now
…or the accusation, running for a year now, that Yes, Bulgaria “split the right”. As I had occasion to mention today, the previous elections were not a success, but the decision to run as “Yes, Bulgaria” was still the right one, for many reasons I will only sketch here. After the elections I analyzed the exit polls, and it turned out that although it did not achieve the desired result, Yes, Bulgaria attracted the largest share of new voters and first-time voters. It took more votes from GERB than from RB (2014). In the “long furrow” we have to plough, all of this matters. Whether it matters more than a hypothetical entry into parliament, I don’t know. But today we have a Yes, Bulgaria with its own face and its own message, one that enters into productive dialogue and alliances and attracts new people into local structures around the country. A two-week-old Yes, Bulgaria joining “yet another right-wing coalition” (as such a coalition would undoubtedly have been labelled back then) might not have had the same fate. Recently someone told me they were pleasantly surprised that we had not died off after the elections. And indeed, we have not.
Leave the yellow cobblestones and go out into the country
Quite right. That is exactly what we are doing. In one year we founded dozens of local structures, in regional centres but also in smaller towns. We are well aware that elections are not won on Facebook or with strolls between Karnigradska and Dragan Tsankov (the addresses of the DSB and Yes, Bulgaria headquarters).
You are left-wing, not right-wing!
This “accusation” against Yes, Bulgaria started at the very beginning, before we even had a programme. It continued despite the programme and despite the dozens of position papers we have published. Perhaps it stems from the initial self-description as “neither left nor right”, I don’t know. And although I consider “left” and “right” to be emptied of meaning in Bulgaria, I should still stress that Yes, Bulgaria is a centrist party standing on the right of centre. For many reasons: the political positioning of its members, the election programme, and the dozens of positions on various topics which invariably defend private property, private initiative, entrepreneurship, non-interference of the state in people’s private lives, and other right-of-centre principles and values. Yes, some of our positions could be described as more left-leaning (I can think of one or two examples), but that is exactly why we are in the centre. Perhaps the confusion about our place in the spectrum comes from the opposition between liberal and conservative: since in the US the left is liberal and the right is conservative, some assume this must always be the case. Well, it isn’t. No, we are not left-wing. And yes, we are liberal rather than conservative (which does not mean we have no conservative-minded people).
But the Greens are left-wing!
Historically, green parties have indeed sat on the left of the spectrum: protecting nature from human activity is invariably in conflict with unlimited freedom for business. But our Greens in particular lean towards the centre. One of their current co-chairs is Vladislav Panev, a financier with fairly right-leaning views. Their governing bodies include both more left- and more right-leaning people. And nowadays “green” does not have to mean “left”: sustainable development is good for business as well as for nature, it is simply a different perspective. One example from today: minister Neno Dimov, who is considered very right-wing (coming from the Institute for Right-Wing Policy), is proposing green measures concerning cars. And even if the Greens are somewhat left of centre, we have agreed to disagree on some topics. In Bulgaria three people cannot agree on everything, let alone three parties. We have common goals, and each party keeps its own identity.
Yet another pointless mechanical alliance
…or “how is this different from the Blue Coalition and the Reformist Bloc”. Without treating this as criticism of the previous alliances on the right: this one does not come in a pre-election period. We are not sitting down at the table because elections are imminent, but with a longer-term perspective. It also comes after a year of joint positions and joint action on a range of issues, both nationally and locally. So the alliance is not mechanical; with both DSB and the Greens we certainly share common values about what the rule of law means.
Soros, America for Bulgaria, and Prokopiev are financing you!
Here I can give the entirely accurate answer that this is complete nonsense. The party has not received a single lev: not from Soros, not from America for Bulgaria, not from Prokopiev. There is a list of donors, the Audit Office has it, and our financial reports are public. We have almost no money, as it happens. If Soros meant to wire us something, he got the IBAN wrong. I can also go into detail. Yes, some people in the leadership have been part of NGOs that received grant funding. But NGOs are not evil, and grant funding is not cash handed out per head. As far as I know, a few members of Yes, Bulgaria currently take part in an NGO that is a beneficiary of America for Bulgaria, and that was precisely why they chose not to sit on the party’s governing bodies, to avoid misinterpretation. Not that this stops Blitz and PIK from going on about the grants.
Why don’t you talk about (issue X, which matters to me)
We talk about many issues and have many positions. But most likely this does not reach you, partly because of our own mistakes and partly because, outside a few online outlets, our media coverage is severely limited (all sorts of drunks get invited onto national media, yet an appearance by Hristo Ivanov gets cancelled, for example). Clearly this is the environment we work in and must adapt to, which means focusing on a few key topics rather than spreading across many in the hope that more people will recognize their own problem among them.
You only talk, you never do anything
Actually we do, but again much of that activity does not reach a wide audience. Most of the concrete work happens at the local level, but at the national level it is not just talk either: we have organized campaigns and sent opinions and proposed amendments to bills to the National Assembly. And yes, we have spoken about the topics that matter and pointed out problems, because political speech is, after all, also “doing something”.
With the old wh*res, a new brothel!
Who are you calling an old wh*re?! On a more serious note: among the faces announced today there are a few familiar ones (with one or two terms in parliament, for example) and many unknown to the wider public. So this is a cliché repeated out of inertia rather than something based on objective facts. Of course there must be people with experience, and of course there are plenty of new faces.
To sum up: it is normal and right for voters to be demanding. And given our experience with disintegrating alliances on the right, it is normal for them to be skeptical. I hope I have dispelled at least a little of that skepticism, though skepticism is also useful. We claim neither infallibility nor sainthood, and any criticism short of “just get lost” will not be brushed aside.
At AWS, our customers have always been the motivation for our innovation. In turn, we’re committed to helping them accelerate the pace of their own innovation. It was in the spirit of helping our customers achieve their objectives faster that we launched AWS Lambda in 2014, eliminating the burden of server management and enabling AWS developers to focus on business logic instead of the challenges of provisioning and managing infrastructure.
In the years since, our customers have built amazing things using Lambda and other serverless offerings, such as Amazon API Gateway, Amazon Cognito, and Amazon DynamoDB. Together, these services make it easy to build entire applications without the need to provision, manage, monitor, or patch servers. By removing much of the operational drudgery of infrastructure management, we’ve helped our customers become more agile and achieve faster time-to-market for their applications and services. By eliminating cold servers and cold containers with request-based pricing, we’ve also eliminated the high cost of idle capacity and helped our customers achieve dramatically higher utilization and better economics.
After we launched Lambda, though, we quickly learned an important lesson: a single Lambda function rarely exists in isolation. Rather, many functions are part of serverless applications that collectively deliver customer value. Whether it’s the combination of event sources and event handlers, serverless web apps that combine APIs and functions for dynamic content with static content repositories, or collections of functions that together provide a microservice architecture, our customers were building and delivering serverless architectures for every conceivable problem. Despite the economic and agility benefits that hundreds of thousands of AWS customers were enjoying with Lambda, we realized there was still more we could do.
How Customer Feedback Inspired Us to Innovate
We heard from our customers that getting started—either from scratch or when augmenting their implementation with new techniques or technologies—remained a challenge. When we looked for serverless assets to share, we found stellar examples built by serverless pioneers that represented a multitude of solutions across industries.
There were apps to facilitate monitoring and logging, to process image and audio files, to create Alexa skills, and to integrate with notification and location services. These apps ranged from “getting started” examples to complete, ready-to-run assets. What was missing, however, was a unified place for customers to discover this diversity of serverless applications and a step-by-step interface to help them configure and deploy them.
We also heard from customers and partners that building their own ecosystems—ecosystems increasingly composed of functions, APIs, and serverless applications—remained a challenge. They wanted a simple way to share samples, create extensibility, and grow consumer relationships on top of serverless approaches.
We built the AWS Serverless Application Repository to help solve both of these challenges by offering publishers and consumers of serverless apps a simple, fast, and effective way to share applications and grow user communities around them. Now, developers can easily learn how to apply serverless approaches to their implementation and business challenges by discovering, customizing, and deploying serverless applications directly from the Serverless Application Repository. They can also find libraries, components, patterns, and best practices that augment their existing knowledge, helping them bring services and applications to market faster than ever before.
How the AWS Serverless Application Repository Inspires Innovation for All Customers
Companies that want to create ecosystems, share samples, deliver extensibility and customization options, and complement their existing SaaS services use the Serverless Application Repository as a distribution channel, producing apps that can be easily discovered and consumed by their customers. AWS partners like HERE have introduced their location and transit services to thousands of companies and developers. Partners like Datadog, Splunk, and TensorIoT have showcased monitoring, logging, and IoT applications to the serverless community.
Individual developers are also publishing serverless applications that push the boundaries of innovation—some have published applications that leverage machine learning to predict the quality of wine while others have published applications that monitor crypto-currencies, instantly build beautiful image galleries, or create fast and simple surveys. All of these publishers are using serverless apps, and the Serverless Application Repository, as the easiest way to share what they’ve built. Best of all, their customers and fellow community members can find and deploy these applications with just a few clicks in the Lambda console. Apps in the Serverless Application Repository are free of charge, making it easy to explore new solutions or learn new technologies.
Finally, we at AWS continue to publish apps for the community to use. From apps that leverage Amazon Cognito to sync user data across applications to our latest collection of serverless apps that enable users to quickly execute common financial calculations, we’re constantly looking for opportunities to contribute to community growth and innovation.
At AWS, we’re more excited than ever by the growing adoption of serverless architectures and the innovation that services like AWS Lambda make possible. Helping our customers create and deliver new ideas drives us to keep inventing ways to make building and sharing serverless apps even easier. As the number of applications in the Serverless Application Repository grows, so too will the innovation that it fuels for both the owners and the consumers of those apps. With the general availability of the Serverless Application Repository, our customers become more than the engine of our innovation—they become the engine of innovation for one another.
To browse, discover, deploy, and publish serverless apps in minutes, visit the Serverless Application Repository. Go serverless—and go innovate!
Dr. Tim Wagner is the General Manager of AWS Lambda and Amazon API Gateway.
You can enable continuous backups with a single click in the AWS Management Console, a simple API call, or with the AWS Command Line Interface (CLI). DynamoDB can back up your data with per-second granularity and restore to any single second from the time PITR was enabled up to the prior 35 days. We built this feature to protect against accidental writes or deletes. If a developer runs a script against production instead of staging or if someone fat-fingers a DeleteItem call, PITR has you covered. We also built it for the scenarios you can’t normally predict. You can still keep your on-demand backups for as long as needed for archival purposes but PITR works as additional insurance against accidental loss of data. Let’s see how this works.
Continuous Backup
To enable this feature in the console, we navigate to our table and select the Backups tab. From there, simply click Enable to turn on the feature. I could also turn on continuous backups via the UpdateContinuousBackups API call. After continuous backup is enabled, we should be able to see an Earliest restore date and a Latest restore date.
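The equivalent CLI call, using the table that appears later in this post, would look something like this:

aws dynamodb update-continuous-backups \
    --table-name VerySuperImportantTable \
    --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true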
Let’s imagine a scenario where I have a lot of old user profiles that I want to delete.
I really only want to send service updates to our active users based on their last_update date. I decided to write a quick Python script to delete all the users that haven’t used my service in a while.
import boto3

table = boto3.resource("dynamodb").Table("VerySuperImportantTable")

items = table.scan(
    FilterExpression="last_update >= :date",
    ExpressionAttributeValues={":date": "2014-01-01T00:00:00"},
    ProjectionExpression="ImportantId"
)['Items']

print("Deleting {} Items! Dangerous.".format(len(items)))

with table.batch_writer() as batch:
    for item in items:
        batch.delete_item(Key=item)
Great! This should delete all those pesky non-users of my service that haven’t logged in since 2013. So I run it, and then: CTRL+C CTRL+C CTRL+C CTRL+C (interrupting the currently executing command).
Yikes! Do you see where I went wrong? I’ve just deleted my most important users! Oh, no! Where I had a greater-than sign, I meant to put a less-than! Quick, before Jeff Barr can see, I’m going to restore the table. (I probably could have prevented that typo with Boto 3’s handy DynamoDB conditions: Attr("last_update").lt("2014-01-01T00:00:00"))
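For comparison, here is a sketch of the scan rewritten with those condition objects (same table as above, with the rest of the delete logic unchanged):

import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("VerySuperImportantTable")

# Condition objects make the comparison direction explicit and harder to typo
# than a raw filter-expression string
items = table.scan(
    FilterExpression=Attr("last_update").lt("2014-01-01T00:00:00"),
    ProjectionExpression="ImportantId"
)['Items']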
Restoring
Luckily for me, restoring a table is easy. In the console I’ll navigate to the Backups tab for my table and click Restore to point-in-time.
I’ll specify the time (a few seconds before I started my deleting spree) and a name for the table I’m restoring to.
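The same restore can also be kicked off from the CLI; the target table name and timestamp below are placeholders:

aws dynamodb restore-table-to-point-in-time \
    --source-table-name VerySuperImportantTable \
    --target-table-name VerySuperImportantTable-Restored \
    --restore-date-time 2018-03-26T19:45:00Z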
For a relatively small and evenly distributed table like mine, the restore is quite fast.
The time it takes to restore a table varies based on multiple factors, and restore times are not necessarily correlated with the size of the table. If your dataset is evenly distributed across your primary keys, you’ll be able to take advantage of parallelization, which will speed up your restores.
Learn More & Try It Yourself
There’s plenty more to learn about this new feature in the documentation here.
Pricing for continuous backups varies by region and is based on the current size of the table and all indexes.
A few things to note:
PITR works with encrypted tables.
If you disable PITR and later reenable it, you reset the start time from which you can recover.
Just like on-demand backups, there are no performance or availability impacts to enabling this feature.
Stream settings, Time To Live settings, PITR settings, tags, Amazon CloudWatch alarms, and auto scaling policies are not copied to the restored table.
Jeff, it turns out, knew I restored the table all along because every PITR API call is recorded in AWS CloudTrail.
Let us know how you’re going to use continuous backups and PITR on Twitter and in the comments. – Randall
The judgment of 22 March 2018 in case T-540/15 De Capitani v European Parliament has now been published. The case concerns an application under Article 263 TFEU for annulment of European Parliament decision A(2015) 4931, by which the applicant was refused full access to documents, in particular to the fast-track agreements on the ordinary legislative procedures then under way, proposed in all committees, and to the multi-column tables (describing the proposals of the European Commission, the orientation of the parliamentary committee, the amendments proposed by the internal bodies of the Council of the European Union and, where they exist, the proposed compromise drafts) submitted to the participants in the trilogues for the ongoing ordinary legislative procedures.
The Parliament replied to the applicant that, because of the very large number of documents covered by the initial application, the request had to be rejected. De Capitani then narrowed the request to seven four-column tables drawn up in the trilogues that were ongoing at the date of the initial application.
The case therefore concerns public access to the four-column tables used for the purposes of trilogues, a procedure long regarded, incidentally, as one of the least transparent in EU law-making.
The Parliament refused access on the following grounds:
– the decision-making process would be actually, specifically, and seriously affected by disclosure of the fourth column of the documents at issue;
– the area of police cooperation, to which the documents at issue belong, is highly sensitive, and disclosure of their fourth column would harm the trust between Member States and between the institutions of the European Union, and therefore the good cooperation between them, as well as the Parliament’s internal decision-making process;
– disclosure while negotiations were still under way would likely create a risk of public pressure on the rapporteur, the shadow rapporteurs, and the political groups, since the negotiations concerned the highly sensitive issues of data protection and of the management board of the European Union Agency for Law Enforcement Cooperation and Training (Europol);
– granting access to the fourth column of the documents at issue would make the Council Presidency hesitant to share information and cooperate with the Parliament’s negotiating team, in particular the rapporteur; in addition, because of increased pressure from national authorities and interest groups, the team would be forced to make strategic choices prematurely, namely deciding when to give ground to the Council and when to demand more from its Presidency, which “would make it terribly complicated to reach agreement on a common position”;
– the principle that “nothing is agreed until everything is agreed” proves particularly important for the proper functioning of the legislative procedure, so disclosing an element before the end of the negotiations, even one not sensitive in itself, could have negative consequences for all the other aspects of a file; moreover, disclosing positions that had not yet become final risked giving an inaccurate picture of the institutions’ true positions;
– access to the entire fourth column therefore had to be refused until the agreed text had been approved by the co-legislators.
“As regards the existence of a possible overriding public interest, the Parliament states that the principle of transparency and the higher requirements of democracy do not, and could not, by themselves constitute an overriding public interest.”[8]
The General Court considers it necessary to recall, first, the case-law on the interpretation of Regulation No 1049/2001, then the main characteristics of trilogues, and, third, to decide whether to recognize a general presumption by virtue of which the institution concerned may refuse access to the fourth column of the tables of ongoing trilogues. Finally, should the Court conclude that no such presumption exists, it will examine whether full disclosure of the documents at issue would seriously undermine the decision-making process in question within the meaning of the first subparagraph of Article 4(3) of Regulation No 1049/2001.
The General Court notes that in its resolution of 11 March 2014 on public access to documents, the Parliament called on the Commission, the Council, and itself “to ensure, as a rule, greater transparency of informal trilogues, by holding open meetings and by publishing their documents, including calendars, agendas, minutes, documents examined, decisions taken, information on Member State delegations as well as their positions and minutes, in a standardized and easily accessible online format, subject to the exceptions listed in Article 4(1) of Regulation No 1049/2001”.
In the light of all the foregoing, none of the grounds put forward by the Parliament, taken separately or together, shows that full access to the documents at issue was liable to undermine, specifically and actually, in a reasonably foreseeable and not purely hypothetical manner, the decision-making process in question within the meaning of the first subparagraph of Article 4(3) of Regulation No 1049/2001.
By refusing, in the contested decision, to disclose the fourth column of the documents at issue while the procedure was ongoing, on the ground that this would seriously undermine its decision-making process, the Parliament infringed the first subparagraph of Article 4(3) of Regulation No 1049/2001.
For Joel Wagener, Director of IT at AIBS, simplicity is an important feature he looks for in software applications to use in his organization. So maybe it’s not unexpected that Joel chose Backblaze for Business to back up AIBS’s staff computers. According to Joel, “It just works.”
AIBS (The American Institute of Biological Sciences) is a non-profit scientific association dedicated to advancing biological research and education. Founded in 1947 as part of the National Academy of Sciences, AIBS later became independent and now has over 100 member organizations. AIBS works to ensure that the public, legislators, funders, and the community of biologists have access to and use information that will guide them in making informed decisions about matters that require biological knowledge.
AIBS started using Backblaze for Business Cloud Backup several years ago to make sure that the organization’s data was backed up and protected from accidental loss or computer failure. AIBS is based in Washington, D.C., but is a virtual organization, with staff dispersed around the United States. AIBS needed a backup solution that worked anywhere a staff member was located, and was easy to use, as well. Joel has made Backblaze a default part of the configuration management for all the AIBS endpoints, which in their case are exclusively Macintosh.
“We started using Backblaze on a single computer in 2014, then not too long after that decided to deploy it to all our endpoints,” explains Joel. “We use Groups to oversee backups and for central billing, but we let each user manage their own computer and restore files on their own if they need to.”
“Backblaze stays out of the way until we need it. It’s fairly lightweight, and I appreciate that it’s simple,” says Joel. “It doesn’t throttle backups and the price point is good. I have family members who use Backblaze, as well.”
Backblaze’s Groups feature permits an organization to oversee and manage the user accounts, including restores, or let users handle that themselves. This flexibility fits a variety of organizations, where various degrees of oversight or independence are desirable. The finance and HR departments could manage their own data, for example, while the rest of the organization could be managed by IT. All groups can be billed centrally no matter how other functionality is set up.
“If we have a computer that needs repair, we can put a loaner computer in that person’s hands and they can immediately get the data they need directly from the Backblaze cloud backup, which is really helpful. When we get the original computer back from repair we can do a complete restore and return it to the user all ready to go again. When we’ve needed restores, Backblaze has been reliable.”
Joel also likes that the memory footprint of Backblaze is light — the clients for both Macintosh and Windows are native, and designed to use minimum system resources and not impact any applications used on the computer. He also likes that updates to the client software are pushed out when necessary.
Backblaze for Business also helps IT maintain archives of users’ computers after they leave the organization.
“We like that we have a ready-made archive of a computer when someone leaves,” said Joel. “The Backblaze backup is there if we need to retrieve anything that person was working on.”
There are other capabilities in Backblaze that Joel likes, but hasn’t had a chance to use yet.
“We’ve used Casper (Jamf) to deploy and manage software on endpoints without needing any interaction from the user. We haven’t used it yet for Backblaze, but we know that Backblaze supports it. It’s a handy feature to have.”
“It just works.” — Joel Wagener, AIBS Director of IT
Perhaps the best thing about Backblaze for Business isn’t a specific feature that can be found on a product data sheet.
“When files have been lost, Backblaze has provided us access to multiple prior versions, and this feature has been important and worked well several times,” says Joel.
“That provides needed peace of mind to our users, and our IT department, as well.”
Data that describe processes in a spatial context are everywhere in our day-to-day lives and they dominate big data problems. Map data, for instance, whether describing networks of roads or remote sensing data from satellites, get us where we need to go. Atmospheric data from simulations and sensors underlie our weather forecasts and climate models. Devices and sensors with GPS can provide a spatial context to nearly all mobile data.
In this post, we introduce the WIND toolkit, a huge (500 TB), open weather model dataset that’s available to the world on Amazon’s cloud services. We walk through how to access this data and some of the open-source software developed to make it easily accessible. Our solution considers a subset of geospatial data that exist on a grid (raster) and explores ways to provide access to large-scale raster data from weather models. The solution uses foundational AWS services and the Hierarchical Data Format (HDF), a well adopted format for scientific data.
The approach developed here can be extended to any data that fit in an HDF5 file, which can describe sparse and dense vectors and matrices of arbitrary dimensions. This format is already popular within the physical sciences for both experimental and simulation data. We discuss solutions to gridded data storage for a massive dataset of public weather model outputs called the Wind Integration National Dataset (WIND) toolkit. We also highlight strategies that are general to other large geospatial data management problems.
Wind Integration National Dataset
As variable renewable power penetration levels increase in power systems worldwide, the importance of renewable integration studies to ensure continued economic and reliable operation of the power grid is also increasing. The WIND toolkit is the largest freely available grid integration dataset to date.
The WIND toolkit was developed by 3TIER by Vaisala. They were under a subcontract to the National Renewable Energy Laboratory (NREL) to support studies on integration of wind energy into the existing US grid. NREL is a part of a network of national laboratories for the US Department of Energy and has a mission to advance the science and engineering of energy efficiency, sustainable transportation, and renewable power technologies.
The toolkit has been used by consultants, research groups, and universities worldwide to support grid integration studies. Less traditional uses also include resource assessments for wind plants (such as those powering Amazon data centers), and studying the effects of weather on California condor migrations in the Baja peninsula.
The diversity of applications highlights the value of accessible, open public data. Yet, there’s a catch: the dataset is huge. The WIND toolkit provides simulated atmospheric (weather) data at a two-km spatial resolution and five-minute temporal resolution at multiple heights for seven years. The entire dataset is half a petabyte (500 TB) in size and is stored in the NREL High Performance Computing data center in Golden, Colorado. Making this dataset publicly available easily and in a cost-effective manner is a major challenge.
As other laboratories and public institutions work to release their data to the world, they may face challenges similar to those we experienced. Some prior, well-intentioned efforts to release huge datasets as-is have resulted in data resources that are technically available but fundamentally unusable. They may be stored in an unintuitive format or indexed and organized to support only a subset of potential uses. Downloading hundreds of terabytes of data is often impractical, and most users don’t have access to a big data cluster (or supercomputer) to slice and dice the data as they need after it’s downloaded.
We aim to provide a large amount of data (50 terabytes) to the public in a way that is efficient, scalable, and easy to use. In many cases, researchers can access these huge cloud-located datasets using the same software and algorithms they have developed for smaller datasets stored locally. Only the pieces of data they need for their individual analysis must be downloaded. To make this work in practice, we worked with the HDF Group and have built upon their forthcoming Highly Scalable Data Service.
In the rest of this post, we discuss how the HSDS software was developed to use Amazon EC2 and Amazon S3 resources to provide convenient and scalable access to these huge geospatial datasets. We describe how the HSDS service has been put to work for the WIND Toolkit dataset and demonstrate how to access it using the h5pyd Python library and the REST API. We conclude with information about our ongoing work to release more ‘open’ datasets to the public using AWS services, and ways to improve and extend the HSDS with newer Amazon services like Amazon ECS and AWS Lambda.
Developing a scalable service for big geospatial data
The HDF5 file format and API have been used for many years and are an effective means of storing large scientific datasets. For example, NASA’s Earth Observing System (EOS) satellites collect more than 16 TB of data per day in HDF5.
With the rise of the cloud, there are new challenges and opportunities to rethink how HDF5 can be enhanced to work effectively as a component in a cloud-native architecture. For the HDF Group, working with NREL has been a great opportunity to put ideas into practice with a production-size dataset.
An HDF5 file consists of a directed graph of group and dataset objects. Datasets can be thought of as multidimensional arrays with support for user-defined metadata tags and compression. Typical operations on datasets are reading or writing data to a regular subregion (a hyperslab) or reading and writing individual elements (a point selection). In addition, group and dataset objects may each carry an arbitrary number of user-defined metadata elements known as attributes.
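To make that data model concrete, the short sketch below uses the h5py Python library to create a small file containing a group, a chunked dataset, and an attribute, and then reads back a hyperslab and a point selection. The file name, group, and dataset names are made up for illustration only.

import h5py
import numpy as np

# Create a small HDF5 file: one group, one chunked 2-D dataset, one attribute.
with h5py.File("example.h5", "w") as f:
    dset = f.create_group("windspeed").create_dataset(
        "100m", shape=(8760, 1000), dtype="f4", chunks=(24, 100)
    )
    dset.attrs["units"] = "m s-1"                 # user-defined metadata (attribute)
    dset[0:24, 0:100] = np.random.rand(24, 100)   # write one hyperslab

# Read back a regular subregion (hyperslab) and a single element (point selection).
with h5py.File("example.h5", "r") as f:
    dset = f["windspeed/100m"]
    hyperslab = dset[0:24, 0:10]                  # 24 time steps x 10 locations
    point = dset[12, 5]                           # one individual element
    print(hyperslab.shape, float(point), dset.attrs["units"])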
Many people have used the HDF library in applications developed or ported to run on EC2 instances, but there are a number of constraints that often prove problematic:
The HDF5 library can’t read directly from HDF5 files stored as S3 objects. The entire file (often many GB in size) would need to be copied to local storage before the first byte can be read, and the instance must be configured with an appropriately sized EBS volume.
The HDF library only has access to the computational resources of the instance itself (as opposed to a cluster of instances), so many operations are bottlenecked by the library.
Any modifications to the HDF5 file would somehow have to be synchronized with changes that other instances have made to the same file before writing back to S3.
Using a pattern common to many offerings from AWS, the solution to these constraints is to develop a service framework around the HDF data model. Using this model, the HDF Group has created the Highly Scalable Data Service (HSDS), which provides all the functionality that traditionally was provided by the HDF5 library. By using the service, you don’t need to manage your own file volumes; you can just read and write whatever data you need.
Because the service manages the actual data persistence to a durable medium (S3, in this case), you don’t need to worry about disk management; you simply stream the data you need from the service as you need it. Second, putting the functionality behind a service allows some tricks that increase performance (described in more detail later). And last, HSDS allows any number of clients to access the data at the same time, enabling HDF5 to be used as a coordination mechanism for multiple readers and writers.
In designing the HSDS architecture, we gave much thought to how to achieve scalability of the HSDS service. For accessing HDF5 data, there are two different types of scaling to consider:
Multiple clients making many requests to the service
Single requests that require a significant amount of data processing
To deal with the first scaling challenge, as with most services, we considered how the service responds as the request rate increases. AWS provides some great tools that help in this regard:
Auto Scaling groups
Elastic Load Balancing load balancers
The ability of S3 to handle large aggregate throughput rates
By using a cluster of EC2 instances behind a load balancer, you can handle different client loads in a cost-effective manner.
The second scaling challenge concerns single requests that would take significant processing time with just one compute node. One example of this from the WIND toolkit would be extracting all the values in the seven-year time span for a given geographic point and dataset.
In HDF5, large datasets are typically stored as “chunks”; that is, a regular partition of the array. In HSDS, each chunk is stored as a binary object in S3. The sequential approach to retrieving the time-series values would be for the service to read each needed chunk from S3, extract the needed elements, and go on to the next chunk. In this case, that would mean processing 2557 chunks (roughly one chunk per day of the seven-year record), which would be quite slow.
Fortunately, with HSDS, you can speed this up quite a bit by exploiting the compute and I/O capabilities of the cluster. Upon receiving the request, the receiving node can use other nodes in the cluster to read different portions of the selection. With multiple nodes reading from S3 in parallel, performance improves as the cluster size increases.
The diagram below illustrates how this works in the simplified case of four chunks and four nodes.
This architecture has worked well in practice. In testing with the WIND toolkit and time-series extraction, we observed a request latency of ~60 seconds using four nodes versus ~5 seconds with 40 nodes; performance scales roughly with the size of the cluster.
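The same fan-out idea can be sketched outside of HSDS. The illustrative snippet below is not the HSDS implementation; it simply shows chunk objects being fetched from S3 in parallel using boto3 and a thread pool, with a hypothetical bucket name and key layout.

from concurrent.futures import ThreadPoolExecutor
import boto3

s3 = boto3.client("s3")
BUCKET = "example-hsds-chunks"                                    # hypothetical bucket
keys = ["wtk/windspeed_100m/chunk_%04d" % i for i in range(16)]   # hypothetical chunk objects

def fetch_chunk(key):
    # Each chunk is an independent S3 object, so reads can proceed in parallel.
    return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()

# Fan the reads out across a pool of workers, then combine the results.
with ThreadPoolExecutor(max_workers=8) as pool:
    chunks = list(pool.map(fetch_chunk, keys))

print(sum(len(c) for c in chunks), "bytes read from", len(chunks), "chunks")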
A planned enhancement is to use AWS Lambda for the HSDS worker processing. This would enable 1000-way parallel reads at a reasonable cost, because you pay only for the milliseconds of CPU time that AWS Lambda actually uses.
Public access to atmospheric data using HSDS and AWS
An early challenge in releasing the WIND toolkit data was deciding how to subset the data for different use cases. In general, few researchers need access to the entire 0.5 PB, and a great deal of efficiency and cost reduction can be gained by creating targeted constituent datasets.
NREL grid-integration researchers initially extracted a 2-TB subset by selecting 120,000 points where the wind resource seemed appropriate for development, keeping only the data most important for wind applications (such as 100-m wind speed, converted to power) at the locations of greatest interest for grid studies. To support the remaining users, who needed broader coverage, we down-sampled the data to a 60-minute temporal resolution, keeping all the other variables and the full spatial resolution intact. This reduced dataset is 50 TB in size and describes more than 30 atmospheric variables over 7 years at a 60-minute temporal resolution.
The WindViz browser-based Gridded Wind Toolkit Visualizer was created as an example implementation of the HSDS REST API in JavaScript. The visualizer is written in the style of ECMAScript 2016 using a modern development toolchain that includes webpack and Babel. The source code is available through our GitHub repository. The demo page is hosted via GitHub pages, and we use a cross-origin AJAX request to fetch data from the HSDS service running on the EC2 infrastructure. The visualizer can be used to explore the gridded wind toolkit data on a map. Achieve full spatial resolution by zooming in to a specific region.
Programmatic access is possible using the h5pyd Python library, a distributed analog to the widely used h5py library. Users interact with the datasets (variables) and slice the data from its (time x longitude x latitude) cube form as they see fit.
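As a rough illustration (not a verbatim excerpt from the notebooks), an h5pyd session might look like the sketch below. The domain path, HSDS endpoint, and dataset name used here are placeholders; the Jupyter notebooks referenced below contain the actual values.

import h5pyd

# Open a WIND toolkit domain through HSDS; the domain and endpoint shown here
# are placeholders for whatever the service actually exposes.
f = h5pyd.File("/nrel/wtk-us.h5", "r", endpoint="https://example-hsds-endpoint")

dset = f["windspeed_100m"]        # hypothetical dataset (variable) name
print(dset.shape)                 # the (time x longitude x latitude) cube

# Slice the full seven-year time series at a single grid point; only the
# chunks needed for this selection are touched on the service side.
timeseries = dset[:, 1200, 600]

# Or pull one full spatial snapshot at a single time step.
snapshot = dset[1000, :, :]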
Examples and use cases are described in a set of Jupyter notebooks available on GitHub. Once you have a Jupyter notebook server running on your EC2 instance, create an SSH tunnel from your laptop:
$ ssh -L 8888:localhost:8888 (IP address of the EC2 server)
Now, you can browse to localhost:8888 using the correct token, and interact with the notebooks as if they were local. Within the directory, there are examples for accessing the HSDS API and plotting wind and weather data using matplotlib.
Controlling access and defraying costs
A final concern is rate limiting and access control. Although the HSDS service is scalable and relatively robust, we had a few practical concerns:
How can we protect from malicious or accidental use that may lead to high egress fees (for example, someone who attempts to repeatedly download the entire dataset from S3)?
How can we keep track of who is using the data both to document the value of the data resource and to justify the costs?
If costs become too high, can we charge for some or all API use to help cover the costs?
To approach these problems, we investigated using Amazon API Gateway and its simplified integration with the AWS Marketplace for SaaS monetization as well as third-party API proxies.
In the end, we chose to use API Umbrella because of its close involvement with http://data.gov. While AWS Marketplace is a compelling option for future datasets, we decided to keep this dataset entirely open, at least for now; as community use and the associated costs grow, we’ll likely revisit Marketplace. Meanwhile, API Umbrella provides rate limiting and API key registration out of the box and was simple to implement as a front-end proxy to HSDS. Applications that do want to charge for API use can implement a similar strategy using Amazon API Gateway and AWS Marketplace.
Ongoing work and other resources
As NREL and other government research labs, municipalities, and organizations try to share data with the public, we expect many of you will face challenges similar to those we have tried to address with the architecture described in this post. Providing large datasets is one challenge; doing so in a way that is affordable and convenient for users is an even more difficult goal. Using AWS cloud-native services and the existing foundation of the HDF file format has allowed us to tackle that challenge in a meaningful way.
Dr. Caleb Phillips is a senior scientist with the Data Analysis and Visualization Group within the Computational Sciences Center at the National Renewable Energy Laboratory. Caleb comes from a background in computer science systems, applied statistics, computational modeling, and optimization. His work at NREL spans the breadth of renewable energy technologies and focuses on applying modern data science techniques to data problems at scale.
Dr. Caroline Draxl is a senior scientist at NREL. She supports the research and modeling activities of the US Department of Energy from mesoscale to wind plant scale. Caroline uses mesoscale models to research wind resources in various countries, and participates in on- and offshore boundary layer research and in the coupling of the mesoscale flow features (kilometer scale) to the microscale (tens of meters). She holds a M.S. degree in Meteorology and Geophysics from the University of Innsbruck, Austria, and a PhD in Meteorology from the Technical University of Denmark.
John Readey has been a Senior Architect at The HDF Group since he joined in June 2014. His interests include web services related to HDF, applications that support the use of HDF, and data visualization. Before joining The HDF Group, John worked at Amazon.com from 2006 to 2014, where he developed service-based systems for eCommerce and AWS.
Jordan Perr-Sauer is an RPP intern with the Data Analysis and Visualization Group within the Computational Sciences Center at the National Renewable Energy Laboratory. Jordan hopes to use his professional background in software engineering and his academic training in applied mathematics to solve the challenging problems facing America and the world.
The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these Heroes share their time and knowledge across social media and in-person events. Heroes also actively help drive content at Meetups, workshops, and conferences.
This March, we have five Heroes that we’re happy to welcome to our network of cloud innovators:
Peter Sbarski is VP of Engineering at A Cloud Guru and the organizer of Serverlessconf, the world’s first conference dedicated entirely to serverless architectures and technologies. His work at A Cloud Guru allows him to work with, talk about, and write about serverless architectures, cloud computing, and AWS. He has written a book called Serverless Architectures on AWS and is currently collaborating on another book called Serverless Design Patterns with Tim Wagner and Yochay Kiriaty.
Peter is always happy to talk about cloud computing and AWS, and can be found at conferences and meetups throughout the year. He helps to organize Serverless Meetups in Melbourne and Sydney in Australia, and is always keen to share his experience working on interesting and innovative cloud projects.
Peter’s passions include serverless technologies, event-driven programming, back end architecture, microservices, and orchestration of systems. Peter holds a PhD in Computer Science from Monash University, Australia and can be followed on Twitter, LinkedIn, Medium, and GitHub.
Michael Wittig works in close collaboration with his brother Andreas Wittig, and together the Wittig brothers are actively creating AWS-related content. Their book Amazon Web Services in Action (Manning) introduces AWS with a strong focus on automation. Andreas and Michael run the blog cloudonaut.io, where they share their knowledge about AWS with the community. The Wittig brothers have also published a number of video courses with O’Reilly, Manning, Pluralsight, and A Cloud Guru. You can also find them speaking at conferences and user groups in Europe, and both brothers co-organize the AWS user group in Stuttgart.
Fernando is an experienced Infrastructure Solutions Leader, holding 5 AWS Certifications, with extensive IT Architecture and Management experience in a variety of market sectors. Working as a Cloud Architect Consultant in the United Kingdom since 2014, Fernando has built an online community for Hispanic speakers worldwide.
Fernando founded a LinkedIn group, a Slack community, and a YouTube channel, all of them named “AWS en Español”, and runs a monthly webinar via YouTube streaming where different leaders discuss aspects of, and challenges around, the AWS Cloud.
During the last 18 months he’s been helping to run and coach AWS User Group leaders across LATAM and Spain, and 10 new User Groups were founded during this time.
Feel free to follow Fernando on Twitter, connect with him on LinkedIn, or join the ever-growing Hispanic Community via Slack, LinkedIn or YouTube.
Anders is a consultant and cloud evangelist at Webstep AS in Norway. He finished his degree in Computer Science at the Norwegian Institute of Technology at about the same time the Internet emerged as a public service. Since then he has been an IT consultant and a passionate advocate of knowledge-sharing.
He architected and implemented his first customer solution on AWS back in 2010, and is essential in building Webstep’s core cloud team. Anders applies his broad expert knowledge across all layers of the organizational stack. He engages with developers on technology and architectures and with top management where he advises about cloud strategies and new business models.
Anders enjoys helping people increase their understanding of AWS and cloud in general, and holds several AWS certifications. He co-founded and co-organizes the AWS User Groups in the largest cities in Norway (Oslo, Bergen, Trondheim and Stavanger), and also uses any opportunity to engage in events related to AWS and cloud wherever he is.
You can follow him on Twitter or connect with him on LinkedIn.
To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.
Less than four years ago, Magda Jadach was convinced that programming wasn’t for girls. On International Women’s Day, she tells us how she discovered that it definitely is, and how she embarked on the new career that has brought her to Raspberry Pi as a software developer.
“Coding is for boys”, “in order to be a developer you have to be some kind of super-human”, and “it’s too late to learn how to code” – none of these three things is true, and I am going to prove that to you in this post. By doing this I hope to help some people to get involved in the tech industry and digital making. Programming is for anyone who loves to create and loves to improve themselves.
In the summer of 2014, I started the journey towards learning how to code. I attended my first coding workshop at the recommendation of my boyfriend, who had constantly told me about the skill and how great it was to learn. I was convinced that, at 28 years old, I was already too old to learn. I didn’t have a technical background, I was under the impression that “coding is for boys”, and I lacked the superpowers I was sure I needed. I decided to go to the workshop only to prove him wrong.
Later on, I realised that coding is a skill like any other. You can compare it to learning any language: there’s grammar, vocabulary, and other rules to acquire.
To my surprise, the workshop was completely inspiring. Within six hours I was able to create my first web page. It was a really simple page with a few cats, some colours, and ‘Hello world’ text. This was a few years ago, but I still remember when I first clicked “view source” to inspect the page. It looked like some strange alien message, as if I’d somehow broken the computer.
I wanted to learn more, but with so many options, I found myself a little overwhelmed. I’d never taught myself any technical skill before, and there was a lot of confusing jargon and new terms to get used to. What was HTML? CSS and JavaScript? What were databases, and how could I connect together all the dots and choose what I wanted to learn? Luckily I had support and was able to keep going.
At times, I felt very isolated. Was I the only girl learning to code? I wasn’t aware of many female role models until I started going to more workshops. I met a lot of great female developers, and thanks to their support and help, I kept coding.
Another struggle I faced was the language barrier. I am not a native speaker of English, and diving into English technical documentation wasn’t easy. The learning curve is daunting in the beginning, but it’s completely normal to feel uncomfortable and to think that you’re really bad at coding. Don’t let this bring you down. Everyone thinks this from time to time.
Play with Raspberry Pi and quit your job
I kept on improving my skills, and my interest in developing grew. However, I had no idea that I could do this for a living; I simply enjoyed coding. Since I had a day job as a journalist, I was learning in the evenings and during the weekends.
I spent long hours playing with a Raspberry Pi and setting up so many different projects to help me understand how the internet and computers work, and to get to grips with the basics of electronics. I built my first ever robot buggy, retro game console, and light switch. For the first time in my life, I had a soldering iron in my hand. Day after day I became more obsessed with digital making.
Soldering iron, where have you been all my life? Weekend with #raspberrypi + @pimoroni + @Pololu + #solder = best time! #electricity
One day I realised that I couldn’t wait to finish my job and go home to finish some project that I was working on at the time. It was then that I decided to hand over my resignation letter and dive deep into coding.
For the next few months I completely devoted my time to learning new skills and preparing myself for my new career path.
I went for an interview and got my first ever coding internship. Two years, hundreds of lines of code, and thousands of hours spent in front of my computer later, I have landed my dream job at the Raspberry Pi Foundation as a software developer, which proves that dreams come true.
Where to start?
I recommend starting with HTML & CSS – the same path that I chose. It is a relatively straightforward introduction to web development. You can follow my advice or choose a different approach. There is no “right” or “best” way to learn.
Below is a collection of free coding resources, both from Raspberry Pi and from elsewhere, that I think are useful for beginners to know about. There are other tools that you are going to want in your developer toolbox aside from HTML.
HTML and CSS are languages for describing, structuring, and styling web pages
Scratch is a graphical programming language that lets you drag and combine code blocks to make a range of programs. It’s a good starting point
Git is version control software that helps you to work on your own projects and collaborate with other developers
Once you’ve got started, you will need a code editor. Sublime Text or Atom are great options for starting out
Coding gives you so much new inspiration, you learn new stuff constantly, and you meet so many amazing people who are willing to help you develop your skills. You can volunteer to help at a Code Club or Coder Dojo to increase your exposure to code, or attend a Raspberry Jam to meet other like-minded makers and start your own journey towards becoming a developer.
Should we host in the cloud or on our own servers? This question was at the center of Dmytro Dyachuk’s talk, given during KubeCon + CloudNativeCon last November. While many services simply launch in the cloud without the organizations behind them considering other options, large content-hosting services have actually moved back to their own data centers: Dropbox migrated in 2016 and Instagram in 2014. Because such transitions can be expensive and risky, understanding the economics of hosting is a critical part of launching a new service. Actual hosting costs are often misunderstood, or secret, so it is sometimes difficult to get the numbers right. In this article, we’ll use Dyachuk’s talk to try to answer the “million dollar question”: “buy or rent?”
The eagle-eyed among you may have noticed that today is 28 February, which is as close as you’re going to get to our sixth birthday, given that we launched on a leap day. For the last three years, we’ve launched products on or around our birthday: Raspberry Pi 2 in 2015; Raspberry Pi 3 in 2016; and Raspberry Pi Zero W in 2017. But today is a snow day here at Pi Towers, so rather than launching something, we’re taking a photo tour of the last six years of Raspberry Pi products before we don our party hats for the Raspberry Jam Big Birthday Weekend this Saturday and Sunday.
Prehistory
Before there was Raspberry Pi, there was the Broadcom BCM2763 ‘micro DB’, designed, as it happens, by our very own Roger Thornton. This was the first thing we demoed as a Raspberry Pi in May 2011, shown here running an ARMv6 build of Ubuntu 9.04.
BCM2763 micro DB
Ubuntu on Raspberry Pi, 2011-style
A few months later, along came the first batch of 50 “alpha boards”, designed for us by Broadcom. I used to have a spreadsheet that told me where in the world each one of these lived. These are the first “real” Raspberry Pis, built around the BCM2835 application processor and LAN9512 USB hub and Ethernet adapter; remarkably, a software image taken from the download page today will still run on them.
Raspberry Pi alpha board
We shot some great demos with this board, including this video of Quake III:
A little something for the weekend: here’s Eben showing the Raspberry Pi running Quake 3, and chatting a bit about the performance of the board. Thanks to Rob Bishop and Dave Emett for getting the demo running.
Pete spent the second half of 2011 turning the alpha board into a shippable product, and just before Christmas we produced the first 20 “beta boards”, 10 of which were sold at auction, raising over £10,000 for the Foundation.
Beta boards on parade
Here’s Dom, demoing both the board and his excellent taste in movie trailers:
See http://www.raspberrypi.org/ for more details, FAQ and forum.
Launch
Rather to Pete’s surprise, I took his beta board design (with a manually-added polygon in the Gerbers taking the place of Paul Grant’s infamous red wire), and ordered 2000 units from Egoman in China. After a few hiccups, units started to arrive in Cambridge, and on 29 February 2012, Raspberry Pi went on sale for the first time via our partners element14 and RS Components.
The first 2000 Raspberry Pis
The first Raspberry Pi from the first box from the first pallet
We took over 100,000 orders on the first day: something of a shock for an organisation that had imagined in its wildest dreams that it might see lifetime sales of 10,000 units. Some people who ordered that day had to wait until the summer to finally receive their units.
Evolution
Even as we struggled to catch up with demand, we were working on ways to improve the design. We quickly replaced the USB polyfuses in the top right-hand corner of the board with zero-ohm links to reduce IR drop. If you have a board with polyfuses, it’s a real limited edition; even more so if it also has Hynix memory. Pete’s “rev 2” design made this change permanent, tweaked the GPIO pin-out, and added one much-requested feature: mounting holes.
Revision 1 versus revision 2
If you look carefully, you’ll notice something else about the revision 2 board: it’s made in the UK. 2012 marked the start of our relationship with the Sony UK Technology Centre in Pencoed, South Wales. In the five years since, they’ve built every product we offer, including more than 12 million “big” Raspberry Pis and more than one million Zeros.
Celebrating 500,000 Welsh units, back when that seemed like a lot
Economies of scale, and the decline in the price of SDRAM, allowed us to double the memory capacity of the Model B to 512MB in the autumn of 2012. And as supply of Model B finally caught up with demand, we were able to launch the Model A, delivering on our original promise of a $25 computer.
A UK-built Raspberry Pi Model A
In 2014, James took all the lessons we’d learned from two-and-a-bit years in the market, and designed the Model B+, and its baby brother the Model A+. The Model B+ established the form factor for all our future products, with a 40-pin extended GPIO connector, four USB ports, and four mounting holes.
The Raspberry Pi 1 Model B+ — entering the era of proper product photography with a bang.
New toys
While James was working on the Model B+, Broadcom was busy behind the scenes developing a follow-on to the BCM2835 application processor. BCM2836 samples arrived in Cambridge at 18:00 one evening in April 2014 (chips never arrive at 09:00 — it’s always early evening, usually just before a public holiday), and within a few hours Dom had Raspbian, and the usual set of VideoCore multimedia demos, up and running.
We launched Raspberry Pi 2 at the start of 2015, pairing BCM2836 with 1GB of memory. With a quad-core Arm Cortex-A7 clocked at 900MHz, we’d increased performance sixfold, and memory fourfold, in just three years.
Nobody mention the xenon death flash.
And of course, while James was working on Raspberry Pi 2, Broadcom was developing BCM2837, with a quad-core 64-bit Arm Cortex-A53 clocked at 1.2GHz. Raspberry Pi 3 launched barely a year after Raspberry Pi 2, providing a further doubling of performance and, for the first time, wireless LAN and Bluetooth.
All our recent products are just the same board shot from different angles
Zero to hero
Where the PC industry has historically used Moore’s Law to “fill up” a given price point with more performance each year, the original Raspberry Pi used Moore’s law to deliver early-2000s PC performance at a lower price. But with Raspberry Pi 2 and 3, we’d gone back to filling up our original $35 price point. After the launch of Raspberry Pi 2, we started to wonder whether we could pull the same trick again, taking the original Raspberry Pi platform to a radically lower price point.
The result was Raspberry Pi Zero. Priced at just $5, with a 1GHz BCM2835 and 512MB of RAM, it was cheap enough to bundle on the front of The MagPi, making us the first computer magazine to give away a computer as a cover gift.
Cheap thrills
MagPi issue 40 in all its glory
We followed up with the $10 Raspberry Pi Zero W, launched exactly a year ago. This adds the wireless LAN and Bluetooth functionality from Raspberry Pi 3, using a rather improbable-looking PCB antenna designed by our buddies at Proant in Sweden.
RS Components limited-edition blue Raspberry Pi 1 Model B
Brazilian-market Raspberry Pi 3 Model B
Visible-light Camera Module v2
Learning about injection moulding the hard way
250 pages of content each month, every month
Essential reading
Forward the Foundation
Why does all this matter? Because we’re providing everyone, everywhere, with the chance to own a general-purpose programmable computer for the price of a cup of coffee; because we’re giving people access to tools to let them learn new skills, build businesses, and bring their ideas to life; and because when you buy a Raspberry Pi product, every penny of profit goes to support the Raspberry Pi Foundation in its mission to change the face of computing education.
We’ve had an amazing six years, and they’ve been amazing in large part because of the community that’s grown up alongside us. This weekend, more than 150 Raspberry Jams will take place around the world, comprising the Raspberry Jam Big Birthday Weekend.
If you want to know more about the Raspberry Pi community, go ahead and find your nearest Jam on our interactive map — maybe we’ll see you there.
This law was adopted in 2000 to require the deposit of printed publications in order to preserve the cultural heritage. It is known only in specialist circles.
In 2010, during yet another campaign for transparency of media ownership, the first GERB government supplemented the law with Article 7a, creating a register of beneficial owners at the Ministry of Culture and requiring annual declarations of ownership. I wrote at the time in Dnevnik:
The text does not indicate whether any measures are envisaged to bring to light persons tied by all manner of contracts to those presenting themselves as owners. Such measures are badly needed. The bill contains no provision on establishing the origin of funds. Does this mean that no checks will be made on the origin of the money with which a stake, or an entire company, is acquired?
And that is exactly what happened.
One example: the letter from the journalists at Klasa “to the publisher and shadow owner Krasimir Gergov”. Incidentally, a key player throughout the transition has been the consultant; consultants slip through Article 7a, and through the newly proposed 7b and 7c as well. Perhaps that is how it is meant to be.
Another example: MP Delyan Peevski spoke of “my media” a year before anything was heard of a deal transferring ownership to him, and long before his name appeared in the imprint of the print titles. Unforgettable is the Peevski interview in which he put the media-power nexus and his media muscle on display:
Tsvetan Tsvetanov’s request, and I will say it plainly, was that I provide cover in my media for several crime bosses with whom he and Stanimir Florov were close.
An example of the unlit side of Peevski’s media holdings is the announced deal with Patrick Halpenny (“Yes, I confirm that I have a deal with New Bulgarian Media Group Holding to acquire the shares of the companies that own the newspapers Monitor, Telegraf, Politika, Meridian Match and Borba,” Halpenny said in an interview for Kapital). There was even clearance from the Commission for Protection of Competition (KZK), so what became of the transfer of ownership? Or perhaps, once the scenario planned with the State Agency for National Security (DANS) failed to materialise in the summer of 2014, the need for a sham deal disappeared as well?
There are laws, there are requirements, it is not as if there were none, but Article 7a did not change the media environment.
Eight years after Article 7a appeared, a new bill was announced this past week, adding two new provisions, 7b and 7c, to the same Law on the Mandatory Deposit of Printed Publications. Delyan Peevski, who is an obliged person under Article 7a and who verifiably has not complied with it, now lines up on the side of those making demands, among the bearers of standards.
2018: Articles 7b and 7c
What the Peevski-Tsonev-Krasteva-Hamid bill amending the Law on Mandatory Deposit provides:
Article 7b: Persons who distribute and sell printed publications file a declaration with the Ministry of Culture stating their beneficial owner and the number of their retail outlets. If they own more than one third of the declared outlets, the Ministry notifies the KZK.
Article 7c: Media service providers declare information about their beneficial owner and about any financing, including the amount and the grounds of each instance of financing.
“Media service provider” is defined by reference to the definition in the Electoral Code.
“Financing” is defined as the receipt of funds or property outside the revenue from ordinary activity, plus all loans other than bank credit.
The bill’s explanatory memorandum speaks of transparency, but the motives are revealed more clearly by Peevski’s open letter to the media, which says:
The bill will also put an end to the speculation by a certain circle of media outlets, financed mainly by criminal defendants or by foreign grants, that the media environment in Bulgaria is opaque and that media ownership is supposedly unclear. That same circle of media has, for more than ten years now, been relentlessly generating fake news about me and my publishing business, all while hiding their own beneficial owners or being financed on non-market terms and serving the passing interests of their Bulgarian or foreign financial benefactors.
Ten years? “About me and my publishing business”? In exactly which register was Peevski listed during those years, given that the requirement for publishers has existed since 2010?
As for the proposed bill: it is largely symbolic, but still:
From the standpoint of legislative technique: the Law on Normative Acts does not allow a law with one subject matter to regulate unrelated obligations with another. Sellers and distributors of printed publications turn up in a law whose purpose is the preservation of cultural heritage.
From the standpoint of substance: I will say what I said years ago about Article 7a. It is an ineffective instrument with vague definitions, and the aim is visibly not to shed light on influence and control, because how can there be such light without the advertisers, without the banks, and without the distribution of EU funds?
Yesterday a round table on remote electronic voting took place, organised by the public council of the Central Election Commission (CEC). The occasion was the analysis that the CEC sent to parliament. The analysis was interpreted as a desire on the CEC’s part to postpone electronic voting. The CEC’s chairwoman, Ms Alexieva, distanced herself from such interpretations, saying that the CEC simply wants the deadlines in the various laws to be synchronised.
At the same time, the specification for the remote electronic voting system will very soon be published for public consultation by the State e-Government Agency (DAEU).
The reason I feel the need to write on this topic again is both the CEC analysis itself and the continued fear-mongering by Prof. Mihail Konstantinov about information security. Let us start with Prof. Konstantinov’s claims.
“The chips in the (Estonian) cards have been cracked, and this compromises the vote”
Yes, there was a very unpleasant bug in one model of Infineon chips which in practice allows an electronic signature (and that includes casting a vote) to be created on your behalf without your knowledge. A few details, however:
Different articles cite different estimates of the attack’s cost (they hover around 20 thousand euros), but even after serious optimisations it stays above several thousand euros. Per card. That makes it an expensive exercise, though still a possible one.
If two-factor authentication is used (for example a pre-registered phone number), the attack becomes even less practical to carry out at scale, because besides breaking the key, the attacker also needs access to the phone.
The Electoral Code provides rules for suspending electronic voting when a serious security problem becomes known. Since electronic voting takes place in the days before election day, voters are simply asked to go to a polling station.
In other words, the practical dimensions of such an attack are far from “the vote is totally compromised”. Yes, it would pose a risk to trust in the system, which is fundamental to the electoral process, but if we keep repeating that everything is broken, we turn that mistrust into a self-fulfilling prophecy. And Estonia has already fixed the problem with the cards.
“France has discontinued electronic voting”
After the leak of Macron’s emails and the general climate of suspected Russian influence on the electoral process, France temporarily suspended its electronic voting for the last elections. That does NOT mean it has discontinued it. Macron has explicitly stated that electronic voting will be back for the 2022 elections.
We should note that there is no 100% secure system, neither electronic nor paper-based. The analysis of the cybersecurity of electronic voting in the European cybersecurity journal tells us as much. But the same analysis says that if technological and organisational measures are taken, the risks can be minimised and overcome.
But let us turn to the CEC’s analysis. It contains quite a few points that are either highly debatable, unargued, or unsubstantiated. I am not saying this is deliberate or malicious. We can see it, to some extent, as conservative, and an institution like the CEC ought to be conservative. I accept as my own mistake that I did not join the expert discussions earlier, and I will try to send my critique to the CEC members so that the best solution can be found.
Here are some excerpts from the report:
“At the same time, remote electronic voting, although convenient for the user (user friendly), requires specific computer skills. Unlike other electronic services, electronic voting involves additional steps related to guaranteeing security.”
I do not think it requires computer skills any different from those needed to use other e-services or internet banking. You insert a card, type a PIN, and enter a code received on your phone. A completely standard process.
“Notwithstanding the possibility of voting remotely multiple times, the risk of a controlled vote with remote electronic voting is greater than, for example, with paper ballots.”
Given that good practice provides the option of voting on paper after you have voted electronically, a controlled vote via electronic voting does not differ much from a controlled vote on paper. It, too, rests on fear and dependence rather than on technology and processes. People vote for whoever they are told to even inside the polling booth, because those votes get counted afterwards.
Here we should note that the current text of the Code lacks the option of voting on paper after voting electronically. Parliament’s working group on the Electoral Code had drafted such texts and agreed on a process, but they were not included, since the current provisions apply only to the experiments running until 2019, so there was no reason to include the option. When the transitional texts are moved into the body of the law, however, such a procedure must be added.
“The alternative options introduced by the Electoral Code for the voter to vote with paper ballots, to vote by machine, and to vote remotely, all existing in parallel, will make the organisation and conduct of elections more expensive and will noticeably slow down the tallying of results.”
On machine voting I agree that it makes the process more expensive. Remote electronic voting, however, is a different case. Apart from the fact that financing (1.5 million leva) has already been secured under the Good Governance operational programme for building and deploying the system, there is no reason why holding elections with an electronic voting option should make the process significantly more expensive. The opinion offers no cost estimates for this (and the CEC’s budget is unchanged from last year’s, even though it is expected to run experiments), so I would not agree with that conclusion.
“In view of what is set out in point 4, at present the only somewhat reliable means of identification among those possible is identification with a qualified electronic signature issued outside the procedure described in point 4, which for the purposes of remote electronic voting is not provided for in the Electoral Code.”
This was a key topic of discussion at yesterday’s round table. On the CEC’s reading, only the Law on Electronic Identification is applicable. On my reading (and not only mine), as someone who took part in the working group that wrote these texts, the idea was never meant to be so restrictive. The Code refers to Regulation (EU) 910/2014, which allows cross-border electronic identification. That is, Bulgarians within the EU who hold electronic identifiers issued by other member states will be able to identify themselves through the regulation’s infrastructure (provided the assurance level of the token is “high”), for example Bulgarians in Austria holding an Austrian electronic identity. In my view, identification can also be carried out with a qualified electronic signature, not directly but by extracting the personal identification number (EGN) from the certificate and having it signed. This method is also permissible under an ordinance to the Law on e-Government. The CEC does not seem to agree, so the specific texts need to be examined further.
“Relatively few citizens, however, have a qualified electronic signature.”
Here that “however” betrays some uncertainty in the preceding position: “however” implies that the previous sentence accepted the qualified electronic signature as admissible, yet relatively few people hold one. In any case, no exact figure is given, and it is unclear whether the estimate was made by eye or whether the Communications Regulation Commission (CRC) was actually asked for the information. I would ask the CRC, but my intuition is that the number of signatures should be at least 40-50 thousand. That is entirely sufficient for running both the experiments and the first few elections. I am also convinced that the providers of electronic signatures would issue free ones with a limited validity period. Voting with a qualified electronic signature (QES) should indeed not be the universal solution, but for meeting the deadlines in the Code (that is, the 2019 European elections) it is entirely sufficient.
“The implementation of the project ‘Building and Deploying a System for Remote Electronic Voting’ [..], the project ‘Development of the Pilot System for Electronic Identification and Its Deployment into Production’ [..] and the project ‘Implementation of the Centralised Automated Information Systems “Civil Registration” and “Address Register”’ [..] must be considered together and their activities synchronised, because they are interrelated. If they are considered and developed separately, no unified, working system will emerge. That is why activity 2 of the project with DAEU as beneficiary and the CEC as partner stipulates that the remote electronic voting system must be synchronised with the national electronic identification system and with the National Population Database, and accordingly with the ‘Civil Registration’ system.”
That would be the ideal scenario: the two registers and the electronic identification system ready before we do electronic voting. That was indeed the concept when we wrote the e-government roadmap. Unfortunately, those projects have been considerably delayed. But the remote electronic voting project does not strictly depend on them. The current GRAO system (the National Population Database) and the national address classifier can do the work needed for the purposes of voting. And DAEU’s electronic authentication project provides the same interfaces that the future “electronic identification centre” will provide, so integrating with it means, in practice, that there is no need to wait for the Ministry of Interior’s project. These are very operational details, but they matter for the full picture.
“Estonia’s remote electronic voting system is certified by a private international company, KPMG. Since the conduct of elections, as the foundation of democracy, is also an element of national security, the CEC considers that introducing universal remote electronic voting would not be reliable in the absence of a state body or organisation to guarantee the security of the system in accordance with the requirements of the law.”
It would be good for a state body (for example DANS) to examine the system, but that should by no means be the only audit, nor even the leading one. It is important to note that nearly all state bodies capable of such an audit sit within the executive branch, so there is a built-in conflict of interest: the executive would be certifying a system for conducting elections. In that sense, a state auditing body is not the mandatory component; the external auditor is.
In the final point of its analysis, the CEC essentially relays the words of Prof. Konstantinov, which I addressed above.
These criticisms materially change the analysis, which is why I will try to bring them to the CEC’s attention as quickly as possible. My advice would be to publish a supplement to the analysis with some clarifications and a consideration of the different scenarios, so that the National Assembly does not end up misled (or able to excuse itself by claiming that it was).
Changes and refinements to the Electoral Code will be needed in any case, especially after the experiments have been run. But instead of asking for a postponement, we could spell out concrete steps for the rollout, for example: first elections with QES (and two-factor authentication), then electronic identification once it is ready, then integration with the new GRAO. Software is not something you build once and forget; it evolves constantly. Such a gradual, phased approach would let us avoid the risks and, most importantly, raise trust in the system (which, unfortunately, is rather low at the moment).
Microservice-application requirements have changed dramatically in recent years. These days, applications operate with petabytes of data, need almost 100% uptime, and end users expect sub-second response times. Typical N-tier applications can’t deliver on these requirements.
The Reactive Manifesto, published in 2014, describes the essential characteristics of reactive systems: responsiveness, resiliency, elasticity, and being message driven.
Being message driven is perhaps the most important characteristic of reactive systems. Asynchronous messaging helps in the design of loosely coupled systems, which is a key factor for scalability. In order to build a highly decoupled system, it is important to isolate services from each other. As already described, isolation is an important aspect of the microservices pattern. Indeed, reactive systems and microservices are a natural fit.
Implemented Use Case
This reference architecture illustrates a typical ad-tracking implementation.
Many ad-tracking companies collect massive amounts of data in near real time. In many cases, these workloads are very spiky and depend heavily on the success of the ad-tech companies’ customers. Typically, an ad-tracking data use case can be separated into a real-time part and a non-real-time part. In the real-time part, it is important to collect data as fast as possible and ask several questions, including: “Is this a valid combination of parameters?”, “Does this program exist?”, and “Is this program still valid?”
Because response time has a huge impact on conversion rate in advertising, it is important for advertisers to respond as fast as possible. This information should be kept in memory to reduce communication overhead with the caching infrastructure. The tracking application itself should be as lightweight and scalable as possible. For example, the application shouldn’t have any shared mutable state and it should use reactive paradigms. In our implementation, one main application is responsible for this real-time part. It collects and validates data, responds to the client as fast as possible, and asynchronously sends events to backend systems.
The non-real-time part of the application consumes the generated events and persists them in a NoSQL database. In a typical tracking implementation, clicks, cookie information, and transactions are matched asynchronously and persisted in a data store. The matching part is not implemented in this reference architecture. Many ad-tech architectures use frameworks like Hadoop for the matching implementation.
The system can be logically divided into the data collection part and the core data update part. The data collection part is responsible for collecting, validating, and persisting the data. In the core data update part, the data that is used for validation gets updated and all subscribers are notified of new data.
Components and Services
Main Application
The main application is implemented using Java 8 and uses Vert.x as the main framework. Vert.x is an event-driven, reactive, non-blocking, polyglot framework to implement microservices. It runs on the Java virtual machine (JVM) by using the low-level IO library Netty. You can write applications in Java, JavaScript, Groovy, Ruby, Kotlin, Scala, and Ceylon. The framework offers a simple and scalable actor-like concurrency model. Vert.x calls handlers by using a thread known as an event loop. To use this model, you have to write code known as “verticles.” Verticles share certain similarities with actors in the actor model. To use them, you have to implement the verticle interface. Verticles communicate with each other by generating messages in a single event bus. Those messages are sent on the event bus to a specific address, and verticles can register to this address by using handlers.
With only a few exceptions, none of the APIs in Vert.x block the calling thread. Similar to Node.js, Vert.x uses the reactor pattern; however, in contrast to Node.js, Vert.x uses several event loops. Unfortunately, not all APIs in the Java ecosystem are written asynchronously (for example, the JDBC API). Vert.x offers the possibility to run these blocking APIs without blocking the event loop, through special verticles called worker verticles. You don’t execute worker verticles on the standard Vert.x event loops, but on a dedicated thread from a worker pool; this way, the worker verticles don’t block the event loop.
Our application consists of five different verticles covering different aspects of the business logic. The main entry point for our application is the HttpVerticle, which exposes an HTTP-endpoint to consume HTTP-requests and for proper health checking. Data from HTTP requests such as parameters and user-agent information are collected and transformed into a JSON message. In order to validate the input data (to ensure that the program exists and is still valid), the message is sent to the CacheVerticle.
This verticle implements an LRU-cache with a TTL of 10 minutes and a capacity of 100,000 entries. Instead of adding additional functionality to a standard JDK map implementation, we use Google Guava, which has all the features we need. If the data is not in the L1 cache, the message is sent to the RedisVerticle. This verticle is responsible for data residing in Amazon ElastiCache and uses the Vert.x-redis-client to read data from Redis. In our example, Redis is the central data store. However, in a typical production implementation, Redis would just be the L2 cache with a central data store like Amazon DynamoDB. One of the most important paradigms of a reactive system is to switch from a pull- to a push-based model. To achieve this and reduce network overhead, we’ll use Redis pub/sub to push core data changes to our main application.
Vert.x also supports Redis pub/sub integration directly; the following code shows our subscriber implementation:
// Forward messages arriving on the event-bus address that mirrors the Redis
// pub/sub channel to the cache verticle.
vertx.eventBus().<JsonObject>consumer(REDIS_PUBSUB_CHANNEL_VERTX, received -> {
    JsonObject value = received.body().getJsonObject("value");
    String message = value.getString("message");
    JsonObject jsonObject = new JsonObject(message);
    eb.send(CACHE_REDIS_EVENTBUS_ADDRESS, jsonObject);
});

// Subscribe to the Redis pub/sub channel; messages published on it are
// delivered to the event-bus address consumed above.
redis.subscribe(Constants.REDIS_PUBSUB_CHANNEL, res -> {
    if (res.succeeded()) {
        LOGGER.info("Subscribed to " + Constants.REDIS_PUBSUB_CHANNEL);
    } else {
        LOGGER.error("Failed to subscribe to " + Constants.REDIS_PUBSUB_CHANNEL + ": " + res.cause());
    }
});
The verticle subscribes to the appropriate Redis pub/sub channel. If a message is sent over this channel, the payload is extracted and forwarded to the cache verticle, which stores the data in the L1 cache. After storing and enriching the data, a response is sent back to the HttpVerticle, which responds to the HTTP request that initially hit this verticle. In addition, the message is converted to a ByteBuffer, wrapped in protocol buffers, and sent to an Amazon Kinesis Data Stream.
The KinesisVerticle handles this last step: it consumes the enriched events from the event bus and writes them to the Amazon Kinesis Data Stream.
Kinesis Consumer
This AWS Lambda function consumes data from an Amazon Kinesis Data Stream and persists the data in an Amazon DynamoDB table. In order to improve testability, the invocation code is separated from the business logic. The invocation code is implemented in the class KinesisConsumerHandler and iterates over the Kinesis events pulled from the Kinesis stream by AWS Lambda. Each Kinesis event is unwrapped, transformed from ByteBuffer to protocol buffers, and converted into a Java object. Those Java objects are passed to the business logic, which persists the data in a DynamoDB table. To improve the performance of successive Lambda invocations, the DynamoDB client is instantiated lazily and reused where possible.
Redis Updater
From time to time, it is necessary to update core data in Redis. A very efficient implementation for this requirement uses AWS Lambda and Amazon Kinesis. New core data is sent over the Kinesis stream in JSON format and consumed by a Lambda function, which iterates over the Kinesis events pulled from the stream by AWS Lambda. Each Kinesis event is unwrapped, transformed from ByteBuffer to String, and converted into a Java object. The Java object is passed to the business logic and stored in Redis. In addition, the new core data is also sent to the main application using Redis pub/sub, in order to reduce network overhead and convert from a pull- to a push-based model.
The following example shows the source code to store data in Redis and notify all subscribers:
// Store the new core data in Redis as a hash keyed by program ID.
public void updateRedisData(final TrackingMessage trackingMessage, final Jedis jedis, final LambdaLogger logger) {
    try {
        ObjectMapper mapper = new ObjectMapper();
        String jsonString = mapper.writeValueAsString(trackingMessage);
        Map<String, String> map = marshal(jsonString);
        String statusCode = jedis.hmset(trackingMessage.getProgramId(), map);
    }
    catch (Exception exc) {
        if (null == logger)
            exc.printStackTrace();
        else
            logger.log(exc.getMessage());
    }
}

// Publish the new core data on the Redis pub/sub channel so that all
// subscribed application instances can refresh their local caches.
public void notifySubscribers(final TrackingMessage trackingMessage, final Jedis jedis, final LambdaLogger logger) {
    try {
        ObjectMapper mapper = new ObjectMapper();
        String jsonString = mapper.writeValueAsString(trackingMessage);
        jedis.publish(Constants.REDIS_PUBSUB_CHANNEL, jsonString);
    }
    catch (final IOException e) {
        log(e.getMessage(), logger);
    }
}
As with the Kinesis consumer, the Redis client is instantiated lazily and reused where possible.
Infrastructure as Code
As already outlined, latency and response time are a critical part of any ad-tracking solution because response time has a huge impact on conversion rate. In order to reduce latency for customers worldwide, it is common practice to roll out the infrastructure in different AWS Regions, staying as close to the end customer as possible. AWS CloudFormation can help you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.
You create a template that describes all the AWS resources that you want (for example, Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. Our reference architecture can be rolled out in different Regions using an AWS CloudFormation template, which sets up the complete infrastructure (for example, Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Container Service (Amazon ECS) cluster, Lambda functions, DynamoDB table, Amazon ElastiCache cluster, etc.).
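As a rough illustration of that multi-Region rollout (a sketch only, using boto3 rather than the console or CLI, and with a hypothetical stack name and template URL):

import boto3

REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-2"]   # example target Regions
TEMPLATE_URL = "https://s3.amazonaws.com/example-bucket/ad-tracking.yaml"  # hypothetical

# Create the same stack from one template in every target Region.
for region in REGIONS:
    cfn = boto3.client("cloudformation", region_name=region)
    cfn.create_stack(
        StackName="ad-tracking-reference",
        TemplateURL=TEMPLATE_URL,
        Capabilities=["CAPABILITY_IAM"],   # assumes the template creates IAM resources
    )
    print("Stack creation started in " + region)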
Conclusion
In this blog post we described reactive principles and an example architecture for a common use case. We leveraged the capabilities of different frameworks in combination with several AWS services to implement reactive principles, not only at the application level but also at the system level. I hope I’ve given you ideas for creating your own reactive applications and systems on AWS.
About the Author
Sascha Moellering is a Senior Solutions Architect. Sascha is primarily interested in automation, infrastructure as code, distributed computing, containers, and the JVM. He can be reached at [email protected]
This column is from The MagPi issue 58. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.
Dr Lucy Rogers calls herself a Transformer. “I transform simple electronics into cool gadgets, I transform science into plain English, I transform problems into opportunities. I am also a catalyst. I am interested in everything around me, and can often see ways of putting two ideas from very different fields together into one package. If I cannot do this myself, I connect the people who can.”
Among many other projects, Dr Lucy Rogers currently focuses much of her attention on reducing the damage from space debris
It’s a pretty wide range of interests and skills, for sure. But it takes only a brief look at Lucy’s résumé to realise that she means it. When she says she’s interested in everything around her, that interest reaches from electronics to engineering, wearable tech, space, robotics, and robotic dinosaurs. And she can be seen talking about all of these things on various companies’ social media channels (such as IBM’s), on websites including the Women’s Engineering Society’s, and in books, including her own.
With her bright LED boots, Lucy was one of the wonderful Pi community members invited to join us and HRH The Duke of York at St James’s Palace just over a year ago
When not attending conferences as guest speaker, tinkering with electronics, or creating engaging IoT tutorials, she can be found retrofitting Raspberry Pis into the aforementioned robotic dinosaurs at Blackgang Chine Land of Imagination, writing, and judging battling bots for the BBC’s Robot Wars.
First broadcast in the UK between 1998 and 2004, Robot Wars was revived in 2016 with a new look and new judges, including Dr Lucy Rogers. Competitors battle their home-brew robots, and Lucy, together with the other two judges, awards victories among the carnage of robotic remains
Lucy graduated from Lancaster University with a degree in Mechanical Engineering. After that, she spent seven years at Rolls-Royce Industrial Power Group as a graduate trainee before becoming a chartered engineer and earning her PhD in bubbles.
Bubbles?
“Foam formation in low‑expansion fire-fighting equipment. I investigated the equipment to determine how the bubbles were formed,” she explains. Obviously. Bubbles!
Lucy graduated from the Singularity University Graduate Studies Program in 2011, focusing on how robotics, nanotech, medicine, and various technologies can tackle the challenges facing the world
She then went on to become a fellow of the Royal Astronomical Society (RAS) in 2005 and, later, a fellow of both the Institution of Mechanical Engineers (IMechE) and the British Interplanetary Society. As a member of the Association of British Science Writers, Lucy wrote It’s ONLY Rocket Science: an Introduction in Plain English.
In It’s Only Rocket Science: An Introduction in Plain English Lucy explains that ‘hard to understand’ isn’t the same as ‘impossible to understand’, and takes her readers through the journey of building a rocket, leaving Earth, and travelling the cosmos
As a standout member of the industry, and all-round fun person to be around, Lucy has quickly established herself as a valued member of the Pi community.
In 2014, with the help of Neil Ford and Andy Stanford-Clark, Lucy worked with the UK’s oldest amusement park, Blackgang Chine Land of Imagination, on the Isle of Wight, with the aim of updating its animatronic dinosaurs. The original Blackgang Chine dinosaurs had a limited range of behaviour: able to roar, move their heads, and stomp a foot in a somewhat repetitive action.
When she contacted Raspberry Pi in November of that same year, the team were working on more creative, varied behaviours, giving each dinosaur a new Raspberry Pi-sized brain. This work later evolved into a very successful dino-hacking Raspberry Jam.
Lucy, Neil Ford, and Andy Stanford-Clark used several Raspberry Pis and Node-RED to visualise flows of events when updating the robotic dinosaurs at Blackgang Chine. They went on to create the successful WightPi Raspberry Jam event, where visitors could join in with the unique hacking opportunity.
Given her love for tinkering with tech, and a love for stand-up comedy that can be uncovered via a quick YouTube search, it’s no wonder that Lucy was asked to help judge the first round of the ‘Make us laugh’ Pioneers challenge for Raspberry Pi. Alongside comedian Bec Hill, Code Club UK director Maria Quevedo, and the face of the first challenge, Owen Daughtery, Lucy lent her expertise to help name winners in the various categories of the teens event, and offered her support to future Pioneers.
If you’re an educator from the UK, chances are you’ve heard of Bett. For everyone else: Bett stands for British Education Technology Tradeshow. It’s the El Dorado of edtech, where every street is adorned with interactive whiteboards, VR headsets, and new technologies for the classroom. Every year since 2014, the Raspberry Pi Foundation has been going to the event, hosted at ExCeL London, to chat to thousands of lovely educators about our free programmes and resources.
On a mission
Our setup this year consisted of four pods (imagine tables on steroids) in the STEAM village, and the mission of our highly trained team of education agents was to establish a new world record for ‘Highest number of teachers talked to in a four-day period’. I’m only half-joking.
Educators with a mission
Meeting educators
The best thing about being at Bett is meeting the educators who use our free content and training materials. It’s easy to get wrapped up in the everyday tasks of the office without stopping to ask: “Hey, have we asked our users what they want recently?” Events like Bett help us to connect with our audience, creating some lovely moments for both sides. We had plenty of Hello World authors visit us, including Gary Stager, co-author of Invent to Learn, a must-read for any computing educator. More than 700 people signed up for a digital subscription, we had numerous lovely conversations about our content and about ideas for new articles, and we met many new authors expressing an interest in writing for us in the future.
We also talked to lots of Raspberry Pi Certified Educators who we’d trained in our free Picademy programme — new dates in Belfast and Dublin now! — and who are now doing exciting and innovative things in their local areas. For example, Chris Snowden came to tell us about the great digital making outreach work he has been doing with the Eureka! museum in Yorkshire.
Raspberry Pi Certified Educator Chris Snowden
Digital making for kids
The other best thing about being at Bett is running workshops for young learners and seeing the delight on their faces when they accomplish something they believed to be impossible only five minutes ago. On the Saturday, we ran a massive Raspberry Jam/Code Club where over 250 children, parents, and curious onlookers got stuck into some of our computing activities. We were super happy to find out that we’d won the Bett Kids’ Choice Award for Best Hands-on Experience — a fantastic end to a busy four days. With Bett over for another year, our tired and happy ‘rebel alliance’ from across the Foundation still had the energy to pose for a group photo.