Tag Archives: uac

ECtHR: anonymous sharing of photos of ballots

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/03/09/art_10_app_elections/

On 23 January 2018 the European Court of Human Rights (ECtHR) delivered its judgment in Magyar Kétfarkú Kutya Párt v. Hungary, concerning a mobile application ("app") that allowed voters to share anonymous photos of their ballots.

The Court held that the fine imposed on the political party for distributing the mobile application violated the party's right to freedom of expression.

The applicant in the case is the Hungarian political party Magyar Kétfarkú Kutya Párt. Three days before a referendum in Hungary, the applicant made the mobile application available to voters. The application allowed voters to upload and share photos of their ballots and also gave them the opportunity to explain why they voted the way they did. The posting and sharing of photos was anonymous. The National Election Commission found that the application breached the principles of fair elections, secrecy of the ballot and the proper exercise of rights, and imposed a fine of EUR 2,700. The Hungarian Supreme Court upheld the Commission's decision as regards the breach of the principle of the proper exercise of rights and quashed the remainder of the decision, since no rule prohibited voters from taking photographs in the voting booths and, moreover, voters' identities could not be discovered through the mobile application. There was also a public interest in documenting problems arising in the course of voting.

The ECtHR applied the proportionality test.

  • Was there an interference with the applicant's right to freedom of expression? The answer is yes.
  • Did the interference pursue a legitimate aim? The Government presented no evidence that the publication of images of these ballots had caused any problem in the voting procedure that would justify restricting the use of the mobile application. The mobile phone application had communicative value and constituted a means of expression on a matter of public interest protected by Article 10 of the Convention. The steps taken by the applicant therefore enjoyed the protection of Article 10 § 1 of the Convention, and sanctioning it violated its right to freedom of expression.

The sanction imposed for operating the mobile application did not meet the requirements of Article 10 § 2 of the Convention.

Violation of Article 10 of the Convention.

Creating a Cost-Efficient Amazon ECS Cluster for Scheduled Tasks

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/creating-a-cost-efficient-amazon-ecs-cluster-for-scheduled-tasks/

Madhuri Peri
Sr. DevOps Consultant

When you use Amazon Relational Database Service (Amazon RDS), depending on the logging levels on the RDS instances and the volume of transactions, you could generate a lot of log data. To ensure that everything is running smoothly, many customers search for log error patterns using different log aggregation and visualization systems, such as Amazon Elasticsearch Service, Splunk, or another tool of their choice. A module needs to periodically retrieve the RDS logs using the SDK, and then send them to Amazon S3. From there, you can stream them to your log aggregation tool.

One option is writing an AWS Lambda function to retrieve the log files. However, depending on the volume of log files retrieved and transferred, the function could exceed the Lambda execution time limit. Another approach is launching an Amazon EC2 instance that runs this job periodically. However, this would require you to run an EC2 instance continuously, which is not an optimal use of time or money.

Using the new Amazon CloudWatch integration with Amazon EC2 Container Service, you can trigger this job to run in a container on an existing Amazon ECS cluster. Additionally, this allows you to reduce costs by running the containers on a fleet of Spot Instances.

In this post, I will show you how to use the new scheduled tasks (cron) feature in Amazon ECS to launch tasks using CloudWatch Events, while using Spot Fleet to optimize availability and cost for containerized workloads.

Architecture

The following diagram shows how the components described in this post work together to schedule a task that retrieves log files from Amazon RDS database instances and deposits them in an S3 bucket.

The Amazon ECS cluster's container instances are provided by a Spot Fleet, which is a good fit for a workload that can run whenever capacity is available. This reduces cluster costs.

The task definition defines which Docker image to retrieve from the Amazon EC2 Container Registry (Amazon ECR) repository and run on the Amazon ECS cluster.
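
The CloudFormation template in this walkthrough creates that task definition for you. As a rough, minimal sketch of what it amounts to, a boto3 equivalent might look like the following; the family name, image URI, and bucket name are placeholders, not values taken from the template:

    import boto3

    ecs = boto3.client("ecs", region_name="us-west-2")

    # Minimal sketch only: point the task at the image in ECR and hand the
    # destination S3 bucket to the container as an environment variable.
    # All names here are placeholders; the CloudFormation template creates
    # the real task definition with its own values.
    ecs.register_task_definition(
        family="rdslogsshipper",
        containerDefinitions=[
            {
                "name": "rdslogsshipper",
                "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/rdslogs:latest",
                "memory": 256,
                "essential": True,
                "environment": [
                    {"name": "S3_BUCKET", "value": "my-rds-logs-bucket"},
                ],
            }
        ],
    )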

The container image contains Python code that makes AWS API calls using boto3. It iterates over the RDS database instances, retrieves their logs, and deposits them in the S3 bucket, from which many customers deliver the logs to their centralized log store. CloudWatch Events defines the schedule on which the container task is launched.
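
The repository's rdslogsshipper.py does this work; the snippet below is only a minimal sketch of that approach, assuming the bucket name arrives in an S3_BUCKET environment variable (that variable name and the S3 key layout are illustrative, not taken from the repository):

    import os

    import boto3

    # Minimal sketch (not the repository's rdslogsshipper.py): list the RDS
    # instances, page through each instance's log files, and store them in S3.
    rds = boto3.client("rds")
    s3 = boto3.client("s3")
    bucket = os.environ["S3_BUCKET"]  # assumed variable name, set in the task definition

    for instance in rds.describe_db_instances()["DBInstances"]:
        db_id = instance["DBInstanceIdentifier"]
        log_files = rds.describe_db_log_files(DBInstanceIdentifier=db_id)["DescribeDBLogFiles"]
        for log in log_files:
            log_name = log["LogFileName"]
            marker, chunks, pending = "0", [], True
            while pending:
                # download_db_log_file_portion pages through the file;
                # AdditionalDataPending signals whether more data remains.
                portion = rds.download_db_log_file_portion(
                    DBInstanceIdentifier=db_id, LogFileName=log_name, Marker=marker
                )
                chunks.append(portion.get("LogFileData", ""))
                marker = portion["Marker"]
                pending = portion["AdditionalDataPending"]
            s3.put_object(
                Bucket=bucket,
                Key=f"{db_id}/{log_name}",  # illustrative key layout
                Body="".join(chunks).encode("utf-8"),
            )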

Walkthrough

To provide the basic framework, we have built an AWS CloudFormation template that creates the following resources:

  • Amazon ECR repository for storing the Docker image to be used in the task definition
  • S3 bucket that holds the transferred logs
  • Task definition, with the image name and S3 bucket name provided as environment variables via input parameters
  • CloudWatch Events rule
  • Amazon ECS cluster
  • Amazon ECS container instances using Spot Fleet
  • IAM roles required for the container instance profiles

Before you begin

Ensure that Git, Docker, and the AWS CLI are installed on your computer.

In your AWS account, instantiate one Amazon Aurora instance using the console. For more information, see Creating an Amazon Aurora DB Cluster.

Implementation Steps

  1. Clone the code from GitHub that performs RDS API calls to retrieve the log files.
    git clone https://github.com/awslabs/aws-ecs-scheduled-tasks.git
  2. Build and tag the image.
    cd aws-ecs-scheduled-tasks/container-code/src && ls

    Dockerfile		rdslogsshipper.py	requirements.txt

    docker build -t rdslogsshipper .

    Sending build context to Docker daemon 9.728 kB
    Step 1 : FROM python:3
     ---> 41397f4f2887
    Step 2 : WORKDIR /usr/src/app
     ---> Using cache
     ---> 59299c020e7e
    Step 3 : COPY requirements.txt ./
     ---> 8c017e931c3b
    Removing intermediate container df09e1bed9f2
    Step 4 : COPY rdslogsshipper.py /usr/src/app
     ---> 099a49ca4325
    Removing intermediate container 1b1da24a6699
    Step 5 : RUN pip install --no-cache-dir -r requirements.txt
     ---> Running in 3ed98b30901d
    Collecting boto3 (from -r requirements.txt (line 1))
      Downloading boto3-1.4.6-py2.py3-none-any.whl (128kB)
    Collecting botocore (from -r requirements.txt (line 2))
      Downloading botocore-1.6.7-py2.py3-none-any.whl (3.6MB)
    Collecting s3transfer<0.2.0,>=0.1.10 (from boto3->-r requirements.txt (line 1))
      Downloading s3transfer-0.1.10-py2.py3-none-any.whl (54kB)
    Collecting jmespath<1.0.0,>=0.7.1 (from boto3->-r requirements.txt (line 1))
      Downloading jmespath-0.9.3-py2.py3-none-any.whl
    Collecting python-dateutil<3.0.0,>=2.1 (from botocore->-r requirements.txt (line 2))
      Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
    Collecting docutils>=0.10 (from botocore->-r requirements.txt (line 2))
      Downloading docutils-0.14-py3-none-any.whl (543kB)
    Collecting six>=1.5 (from python-dateutil<3.0.0,>=2.1->botocore->-r requirements.txt (line 2))
      Downloading six-1.10.0-py2.py3-none-any.whl
    Installing collected packages: six, python-dateutil, docutils, jmespath, botocore, s3transfer, boto3
    Successfully installed boto3-1.4.6 botocore-1.6.7 docutils-0.14 jmespath-0.9.3 python-dateutil-2.6.1 s3transfer-0.1.10 six-1.10.0
     ---> f892d3cb7383
    Removing intermediate container 3ed98b30901d
    Step 6 : COPY . .
     ---> ea7550c04fea
    Removing intermediate container b558b3ebd406
    Successfully built ea7550c04fea
  3. Run the CloudFormation stack and get the names for the Amazon ECR repo and S3 bucket. In the stack, choose Outputs.
  4. Open the Amazon ECS console and choose Repositories. The rdslogs repository created by the stack is listed there. Choose View Push Commands and follow the instructions to authenticate to the repository and push the image that you built in step 2.
  5. Associate the CloudWatch scheduled task with the created Amazon ECS Task Definition, using a new CloudWatch event rule that is scheduled to run at intervals. The following rule is scheduled to run every 15 minutes:
    aws --profile default --region us-west-2 events put-rule --name demo-ecs-task-rule  --schedule-expression "rate(15 minutes)"

    {
        "RuleArn": "arn:aws:events:us-west-2:12345678901:rule/demo-ecs-task-rule"
    }
  6. CloudWatch requires IAM permissions to place a task on the Amazon ECS cluster when the CloudWatch event rule is executed, in addition to an IAM role that can be assumed by CloudWatch Events. This is done in three steps:
    1. Create the IAM role to be assumed by CloudWatch.
      aws --profile default --region us-west-2 iam create-role --role-name Test-Role --assume-role-policy-document file://event-role.json

      {
          "Role": {
              "AssumeRolePolicyDocument": {
                  "Version": "2012-10-17", 
                  "Statement": [
                      {
                          "Action": "sts:AssumeRole", 
                          "Effect": "Allow", 
                          "Principal": {
                              "Service": "events.amazonaws.com"
                          }
                      }
                  ]
              }, 
              "RoleId": "AROAIRYYLDCVZCUACT7FS", 
              "CreateDate": "2017-07-14T22:44:52.627Z", 
              "RoleName": "Test-Role", 
              "Path": "/", 
              "Arn": "arn:aws:iam::12345678901:role/Test-Role"
          }
      }

      The following is an example of the event-role.json file used earlier:

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                    "Service": "events.amazonaws.com"
                  },
                  "Action": "sts:AssumeRole"
              }
          ]
      }
    2. Create the IAM policy defining the ECS cluster and task definition. You need to get these values from the CloudFormation outputs and resources.
      aws --profile default --region us-west-2 iam create-policy --policy-name test-policy --policy-document file://event-policy.json

      {
          "Policy": {
              "PolicyName": "test-policy", 
              "CreateDate": "2017-07-14T22:51:20.293Z", 
              "AttachmentCount": 0, 
              "IsAttachable": true, 
              "PolicyId": "ANPAI7XDIQOLTBUMDWGJW", 
              "DefaultVersionId": "v1", 
              "Path": "/", 
              "Arn": "arn:aws:iam::123455678901:policy/test-policy", 
              "UpdateDate": "2017-07-14T22:51:20.293Z"
          }
      }

      The following is an example of the event-policy.json file used earlier:

      {
          "Version": "2012-10-17",
          "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ecs:RunTask"
                ],
                "Resource": [
                    "arn:aws:ecs:*::task-definition/"
                ],
                "Condition": {
                    "ArnLike": {
                        "ecs:cluster": "arn:aws:ecs:*::cluster/"
                    }
                }
            }
          ]
      }
    3. Attach the IAM policy to the role.
      aws --profile default --region us-west-2 iam attach-role-policy --role-name Test-Role --policy-arn arn:aws:iam::1234567890:policy/test-policy
  7. Add a target to the CloudWatch rule created earlier so that it places the task on the ECS cluster. The following command shows an example. Replace the AWS account ID and region with your settings.
    aws events put-targets --rule demo-ecs-task-rule --targets "Id"="1","Arn"="arn:aws:ecs:us-west-2:12345678901:cluster/test-cwe-blog-ecsCluster-15HJFWCH4SP67","EcsParameters"={"TaskDefinitionArn"="arn:aws:ecs:us-west-2:12345678901:task-definition/test-cwe-blog-taskdef:8"},"RoleArn"="arn:aws:iam::12345678901:role/Test-Role"

    {
        "FailedEntries": [], 
        "FailedEntryCount": 0
    }

That’s it. The log-shipping task now runs on the defined schedule.

To test this, open the Amazon ECS console, select the Amazon ECS cluster that you created, and then choose Tasks, Run New Task. Select the task definition created by the CloudFormation template, and the cluster should be selected automatically. As this runs, the S3 bucket should be populated with the RDS logs for the instance.
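
If you prefer to kick off a test run programmatically instead of through the console, a boto3 equivalent of that console action looks roughly like this; the cluster and task definition names are the placeholder values from the earlier put-targets example, so substitute your own:

    import boto3

    ecs = boto3.client("ecs", region_name="us-west-2")

    # Place a single task on the cluster, outside the CloudWatch schedule.
    # Cluster and task definition names are placeholders; use the values
    # from your CloudFormation outputs.
    response = ecs.run_task(
        cluster="test-cwe-blog-ecsCluster-15HJFWCH4SP67",
        taskDefinition="test-cwe-blog-taskdef:8",
        count=1,
    )
    for task in response["tasks"]:
        print(task["taskArn"], task["lastStatus"])
    for failure in response["failures"]:
        print("failed:", failure["reason"])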

Conclusion

In this post, you’ve seen that the usual choices for workloads that need to run at a scheduled time are Lambda with CloudWatch Events or EC2 with cron. However, sometimes a job exceeds the Lambda execution time limit, or keeping an EC2 instance running for it is not cost-effective.

In such cases, you can schedule the tasks on an ECS cluster using CloudWatch rules. In addition, you can use a Spot Fleet cluster with Amazon ECS for cost-conscious workloads that do not have hard requirements on execution time or instance availability in the Spot Fleet. For more information, see Powering your Amazon ECS Cluster with Amazon EC2 Spot Instances and Scheduled Events.

If you have questions or suggestions, please comment below.

UACMe – Defeat Windows User Account Control (UAC)

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/K8gCPhG8--Y/

UACMe is a compiled, C-based tool that contains a number of methods to defeat Windows User Account Control, commonly known as UAC. It abuses the built-in Windows AutoElevate backdoor and includes 41 methods. The tool requires an Admin account with Windows UAC set to default settings. Usage: run the executable from the command line: akagi32 [Key]…

Read the full post at darknet.org.uk

Case 231: Laziness

Post Syndicated from The Codeless Code original http://thecodelesscode.com/case/231

The head monk of the Swooping Falcon Clan asked master
Banzen for assistance with a difficult customer. The
customer was a maker of silk-and-bamboo kites, and the
clan’s application allowed her to curate her large online
catalog.

“I simply cannot make her happy,” complained the
head monk.

“Tell me what makes her unhappy,” said Banzen.
“Then perhaps you can do the opposite of that.”

“Laziness,” the head monk declared; “for she says that our
interface makes her do too much work, yet the work
is her fault, not ours.”

“Explain,” said Banzen.

“First,” the head monk said, “for each kite, she wishes to
allow only certain silks. So our interface must have her
specify the silks on a kite-by-kite basis—yet always she
says this task is too tedious. It is not our fault that she
is so particular!”

“Indeed,” said Banzen.

“To make matters worse,” the head monk said, “she has
hundreds of bolts of silk in her shop, of which
dozens may be offered for any given kite! We have tried
every widget in our library—multiple-selection lists,
dual listboxes, typeahead-enabled drop-downs—yet always
she says this task is too onerous. It is not our fault that
she offers so many choices!”

“Absolutely,” said Banzen.

“Finally,” the head monk concluded, “new silks are always
being introduced and old ones are always being retired. So
she must revise the list of silks for each kite throughout
the year—yet always she says this task is too burdensome.
It is not our fault that fashion is fickle!”

“Agreed,” said Banzen.

So Banzen went to see the kite-maker.

- - -

The kite-maker’s complaints were exactly as the head monk
described. After hearing them, Banzen wandered her workshop,
and indeed found many hundreds of bolts of silk, each
a different pattern and hue.

After pondering a moment, Banzen pointed to the bamboo
skeleton of a kite on her workbench.

“What silks will you offer for that one?” asked Banzen.

“Blue cloud designs only,” said the kite-maker. “But I
have dozens of silks with blue cloud designs.”

“And that one?” asked Banzen, pointing to another.

“That is one of my ‘crow’ series,” said the kite-maker.
“Blue feather designs or black feather designs, but no
lightweight silks.”

“And that one?” asked Banzen, pointing to another.

“That is one of my ‘dragon’ series,” said the kite-maker.
“Black solids, red fire designs, or white earth designs,
but no heavy silks.”

When he was certain that he understood the kite-maker’s
algorithm, Banzen returned to the head monk.

- - -

“It is as you described,” said Banzen to the head monk.
“What makes the kite-maker unhappy is laziness.”

“Yet how can she be corrected?” asked the monk.

“Wú,” said Banzen, producing a thick envelope from his robes.
“Here is the means of correction, which I will deliver
to you each week until the kite-maker has achieved happiness.”

In one quick motion Banzen tossed the contents of the
envelope into the air. Hundreds of minuscule squares of colored
paper spiraled gently down around the room; each like a
tiny kite adrift on the wind.

“Lovely,” said the head monk, “but what is it?”

“It is called confetti,” said Banzen. “I made it from your
week’s pay. Using small bills, of course.”