For some, loans against mortgages; for others, against something else

Post Syndicated from nellyo original https://nellyo.wordpress.com/2016/04/26/gmpress57/

The newspaper Banker has published a piece on GM Press: 57 million for media. Another fragment of the most recent history of Bulgarian media, on the theme of how those in power buy themselves media comfort.

 

What they were telling us:

Tsvetan Vasilev and Prime Minister Borisov at the time of KTB's massive financing of pro-government media:

2011: thanks to the profile that Valeria Veleva published in Trud, we know what Tsvetan Vasilev explicitly emphasized:

He explicitly emphasized that he does not finance media through the bank. This came in response to publications claiming that he was financing private media with the state's money.

In those years Prime Minister Borisov claimed that the reason for the state of the media was the faint-heartedness of journalists who agree to write for money: let them quit, mortgage their apartments, take a loan from the bank, and start their own media:

If, figuratively speaking, 50 journalists quit, they can form an association, take out a loan or pool money among themselves, and start a newspaper.

 

What turned out to be the case: that it was not quite so:

There was generous lending, not against mortgaged apartments but against commissioned publications and broadcasts; and what is more, after the developments with KTB, those loans are now on our tab.

2015: in an interview for bTV, Tsvetan Vasilev himself outlined the circle of the so-called-media-financed-by-KTB:

Since the whole government is fighting so hard to get the money back, what is it doing about Tosho Toshev's newspapers, the so-called newspaper of Tosho Toshev and Daniel Rutz? What is it doing about the so-called newspapers of Venelina Gocheva (24 Chasa) and Petyo Blaskov (Trud)? What about the so-called newspaper of Slavka Bozukova and Todor Batkov (Standart)? Shall I keep listing? They all owe money. Why isn't the state working on recovering that money? Because they serve those in power.

The newspapers listed above did not take money directly from KTB but indirectly, through commercial companies financed by KTB. Bulit 2007 is one of them. The fact Banker reveals is that

GM Press owes the KTB-financed company Bulit 2007 the sum of 56,482,703 leva. This was confirmed by ruling No. 1122 of the Sofia City Court of 29 March 2016, which is not subject to appeal.

Standart, Blitz, Show, Maritsa, Struma and so on, and some 57 million as a gift: as early as 2012 the company GM Press stopped servicing its loans, yet new ones kept being extended to it.

 

What comes next

In recent days Blitz and PIK swapped their chiefs, and authors who until now wrote for Blitz are moving to PIK.

To start with, PIK renounced Peevski, labeled him an Islamist, and claims (whatever that is supposed to mean) that

Peevski has been reaching for various media outlets in recent days, since he has committed to Mestan to provide him with media comfort. Part of the trafficking flows has probably been redirected to Peevski's new media acquisitions, which have been promised to work in favor of the Islamist cause of Mestan's party.

 

Whether the reshuffles are for operational reasons, or whether there are also differences in the orientation of the owners (which owners?), remains to be seen.

Filed under: BG Content, BG Media, Media Law

New functional programming language can generate C, Python code for apps (InfoWorld)

Post Syndicated from ris original http://lwn.net/Articles/685170/rss

InfoWorld introduces Futhark, an open source functional programming language designed for creating code that runs on GPUs. It can automatically generate both C and Python code to be integrated with existing apps. “Most GPU programming involves using frameworks like OpenCL or CUDA, both of which use variations of C or C++ to generate code that runs on the GPU. Futhark can generate C code, but is its own language, more similar to Haskell or Standard ML than C. (Futhark is itself written in Haskell.)

Futhark’s creators claim that the expressiveness of the language makes it easier to describe complex operations that use parallelism. This includes the ability to support nested parallelizations (parallel operations inside other parallel operations). Futhark can do this “despite the complexities of efficiently mapping to the flat parallelism supported by hardware, as a great many programs depend on this feature,” say the language’s creators.”

2016-04-26 death

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3301

(I am writing this because the topic interests me; I have no plans to die any time soon.)

Via a post by Meredith Patterson (which is also worth reading) I came across “A Protocol for Dying” by Pieter Hintjens (whose name I am sure I cannot pronounce correctly). It reminded me a little of my own experiences with the neuralgia and the hospital, above all the will, and the surprise of various people that I had thought the situation and all the options through at all.

Death is a somewhat interesting topic, mostly because it is one of the hardest things to talk about. For all sorts of reasons we humans are built so that the subject blocks us in all sorts of ways, from the mere mention of it, through how we behave around the sick and the dying, to the difficulties of killing people. That, in turn, leaves us totally unprepared. The protocol described by Hintjens and Meredith's ideas are, broadly speaking, a step in the right direction, one more fact of life we have to get used to. I wonder whether the next big media campaign will be on a similar theme (much as we now finally see homosexuality seriously included in most series/films/books, which cannot be seen in those from 30-40 years ago).

Maybe we need an annual Think/talk about death day? :)

Tuesday’s security updates

Post Syndicated from ris original http://lwn.net/Articles/685135/rss

CentOS has updated nspr (C5: two
vulnerabilities), nss (C5: two
vulnerabilities), nspr (C7: two
vulnerabilities), nss (C7: two
vulnerabilities), nss-softokn (C7: two
vulnerabilities), and nss-util (C7: two vulnerabilities).

Fedora has updated ansible1.9 (F23; F22: code
execution), golang (F23; F22: denial of service), gsi-openssh
(F23; F22:
command injection), mingw-poppler (F23; F22: code
execution), mod_nss (F23; F22: invalid handling of +CIPHER operator),
and webkitgtk4 (F22: multiple vulnerabilities).

openSUSE has updated flash-player
(11.4: code execution).

Oracle has updated nss and nspr
(OL5: two vulnerabilities) and nss, nspr,
nss-softokn, and nss-util
(OL7: three vulnerabilities).

Scientific Linux has updated nss,
nspr, nss-softokn, nss-util
(SL7: two vulnerabilities).

SUSE has updated php53
(SLE11-SP4: multiple vulnerabilities), portus (SLEM12: multiple vulnerabilities), and
xen (SLES11-SP2: multiple vulnerabilities).

People Trust Robots, Even When They Don’t Inspire Trust

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/04/people_trust_ro.html

Interesting research:

In the study, sponsored in part by the Air Force Office of Scientific Research (AFOSR), the researchers recruited a group of 42 volunteers, most of them college students, and asked them to follow a brightly colored robot that had the words “Emergency Guide Robot” on its side. The robot led the study subjects to a conference room, where they were asked to complete a survey about robots and read an unrelated magazine article. The subjects were not told the true nature of the research project.

In some cases, the robot — which was controlled by a hidden researcher — led the volunteers into the wrong room and traveled around in a circle twice before entering the conference room. For several test subjects, the robot stopped moving, and an experimenter told the subjects that the robot had broken down. Once the subjects were in the conference room with the door closed, the hallway through which the participants had entered the building was filled with artificial smoke, which set off a smoke alarm.

When the test subjects opened the conference room door, they saw the smoke – and the robot, which was then brightly-lit with red LEDs and white “arms” that served as pointers. The robot directed the subjects to an exit in the back of the building instead of toward the doorway – marked with exit signs – that had been used to enter the building.

“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” said Paul Robinette, a GTRI research engineer who conducted the study as part of his doctoral dissertation. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”

The researchers surmise that in the scenario they studied, the robot may have become an “authority figure” that the test subjects were more likely to trust in the time pressure of an emergency. In simulation-based research done without a realistic emergency scenario, test subjects did not trust a robot that had previously made mistakes.

Our notions of trust depend on all sorts of cues that have nothing to do with actual trustworthiness. I would be interested in seeing where the robot fits in in the continuum of authority figures. Is it trusted more or less than a man in a hazmat suit? A woman in a business suit? An obviously panicky student? How do different looking robots fare?

News article. Research paper.

Raspberry Pi telehealth kit piloted in NHS

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/raspberry-pi-telehealth-kit-piloted-nhs/

I had to spend a couple of nights in hospital last year – the first time I’d been on a hospital ward in about fifteen years. Things have moved on since my last visit: being me, the difference I really noticed was the huge number of computers, often on wheely trolley devices so they could be pushed around the ward, and often only used for one task. There was one at A&E when I came in, used to check NHS numbers and notes; another for paramedics to do a temperature check (this was at the height of the Ebola scare). When my blood was taken for some tests, another mobile computer was hooked up to the vials of blood and the testing hardware right next to my bed, feeding back results to a database; one controlled my drip, another monitored my oxygen levels, breathing, heart rate and so on on the ward. PCs for logging and checking were everywhere. I’m sure the operating room was full of the things too, but I was a bit unconscious at that point, so had stopped counting. (I’m fine now, by the way. Thanks for worrying.)

intensivecare

The huge variety of specialised and generic compute in the hospital gave me something to think about other than myself (which was very, very welcome under the circumstances). Namely, how much all this was costing; and how you could use Raspberry Pis to take some of that cost out. Here’s a study from 2009 about some of the devices used on a ward. That’s a heck of a lot of machines. We know from long experience at Raspberry Pi that specialised embedded hardware is often very, very expensive; manufacturers can put a premium on devices used in specialised environments, and increasingly, people using those devices are swapping them out for something based on Raspberry Pi (about a third of our sales go into embedded compute in industry, for factory automation and similar purposes). And we know that the NHS is financially pressed.

This is a long-winded way of saying that we’re really, really pleased to see a Raspberry Pi being trialled in the NHS.

This is the MediPi. It’s a device for heart patients to use at home to measure health statistics, which means they don’t need daily visits from a medical professional. Telehealth devices like this are usually built on iPads using 3G and Bluetooth with specially commissioned custom software and custom peripherals, which is a really expensive way to do a few simple things.

medipi

MediPi is being trialled this year with heart failure patients in an NHS trust in the south of England. Richard Robinson, the developer, is a technical integration specialist at the Health and Social Care Information Centre (HSCIC) who has a particular interest in Raspberry Pi. He was shocked to find studies suggesting that devices like this were costing the NHS at least £2,000 a year per patient, making telehealth devices just too expensive for many NHS trusts to be able to use in any numbers. MediPi is much cheaper. The whole kit – that is, the Pi, the touchscreen, a blood pressure cuff, a finger oximeter and some diagnostic scales – comes in at £250 (the hope is that building devices like this in bulk will bring prices even lower). And it’s all built on open-source software.

MediPi issues on-screen instructions showing patients how to take and record their measurements. When they hit the “transmit” button MediPi compresses and encrypts the data, and sends it to their clinician. Doctors have asked to be able to send messages to patients using the device, and patients can reply to them. MediPi also includes a heart questionnaire which patients respond to daily using the touch screen.
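The post doesn't show MediPi's code, but the transmit path it describes (compress, encrypt, send to the clinician) is easy to sketch. Below is a minimal, hypothetical Python illustration of that pattern, not the actual MediPi implementation; the endpoint URL, key handling and payload fields are placeholders, and it assumes the requests and cryptography packages are available.

```python
# Illustrative sketch only -- not the actual MediPi code. It shows the general
# pattern the post describes: compress the readings, encrypt them, and POST
# them to a clinician-facing endpoint. URL, key and payload are placeholders.
import json
import zlib

import requests
from cryptography.fernet import Fernet

CLINICIAN_ENDPOINT = "https://example-concentrator.invalid/upload"  # placeholder
SECRET_KEY = Fernet.generate_key()  # in practice a provisioned key, not generated here


def send_readings(readings: dict) -> int:
    """Compress, encrypt and transmit one batch of patient readings."""
    raw = json.dumps(readings).encode("utf-8")
    compressed = zlib.compress(raw)                  # shrink the payload
    token = Fernet(SECRET_KEY).encrypt(compressed)   # encrypt before it leaves the device
    response = requests.post(CLINICIAN_ENDPOINT, data=token, timeout=30)
    return response.status_code


if __name__ == "__main__":
    print(send_readings({"blood_pressure": "120/80", "spo2": 97, "weight_kg": 82.4}))
```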

Richard Robinson says:

We created a secure platform which can message using Spine messaging and also message using any securely enabled network. We have designed it to be patient-friendly, so it has a simple touch-tiled dashboard interface and various help screens, and it’s low cost.

Clinicians don’t want to be overwhelmed with enormous amounts of data so we have developed a concentrator that will take the data and allow clinicians certain views, such as alerts for ‘out of threshold’ values.

My aim for this is that we demonstrate that telehealth is affordable at scale.

We’re really excited about this trial, and we’ll be keeping an eye on how things pan out. We’d love to see more of this sort of cost-reducing innovation in the health sector; the Raspberry Pi is stable enough and cheap enough to provide it.

The post Raspberry Pi telehealth kit piloted in NHS appeared first on Raspberry Pi.

Finding a new home for Thunderbird

Post Syndicated from corbet original http://lwn.net/Articles/685060/rss

The Mozilla Foundation has (in the guise of Gervase Markham) posted an
update on the process of spinning off the Thunderbird mail client as a
separate project. As part of that, they engaged Simon Phipps to write up
a survey of possible new homes [PDF] for the project. “Having
reviewed the destinations listed below together with several others which
were less promising, I believe there are three viable choices for a future
home for the Thunderbird Project; Software Freedom Conservancy, The
Document Foundation and a new deal at the Mozilla Foundation. None of these
three is inherently the best, and it is possible that over time the project
might seek to migrate to a ‘Thunderbird Foundation’ as a permanent home
(although I would not recommend that as the next step).

EU: the fourth industrial revolution #4IR

Post Syndicated from nellyo original https://nellyo.wordpress.com/2016/04/26/4ir/

In implementation of the Digital Single Market Strategy, on 19 April 2016 the European Commission:

  • presented a set of measures to support and link up national initiatives for the digitization of all industrial sectors and related services, and to increase investment through strategic partnerships and networks;
  • proposed concrete measures to speed up the development of common standards in priority areas, such as fifth-generation (5G) communication networks or cybersecurity, and to modernize public services;
  • announced a plan to create a European cloud that will give Europe's 1.7 million researchers and 70 million science and technology professionals a virtual environment for storing, managing, analyzing and reusing large volumes of research data.

The Commission identifies five priority areas: 5G networks, cloud computing, the internet of things, data technologies and cybersecurity.

In the coming months the Commission will begin implementing 20 measures, including:

  • connecting all business registers and insolvency registers with one another and with the e-Justice portal, which is to become a one-stop-shop tool;
  • developing cross-border e-health services, such as e-prescriptions, electronic health records, and so on.

The documents use the term fourth industrial revolution, explained by the presence of integrated cyber-physical systems and the internet of things, big data and cloud computing, and systems based on artificial intelligence. #4IR

Filed under: Digital, EU Law

Weekly roundup: pixel perfect

Post Syndicated from Eevee original https://eev.ee/dev/2016/04/25/weekly-roundup-pixel-perfect/

April’s theme is finish Runed Awakening.

Mel is out of town, so I’m being constantly harassed by cats.

  • irl: I cleaned the hell out of my room.

  • Runed Awakening: I finally figured out the concept for a decent chunk of the world, and built about half of it. I finally, finally, feel like the world and plot are starting to come together. I doubt I’ll have it feature-complete in only another week, but I’m miles ahead of where I was at the beginning of the month. The game is maybe 40% built, but closer to 80% planned, and that’s been the really hard part.

    I’d previously done two illustrations of items as sort of a proof of concept for embedding images in interactive fiction. A major blocker was coming up with a palette that could work for any item and keep the overall style cohesive. I couldn’t find an appropriate palette and I’m not confident enough with color to design one myself, so the whole idea was put on hold there.

    Last week, a ludum dare game led me to the DB32 palette, one of the few palettes I’ve ever seen that has as many (!) as 32 colors and is intended for general use. I grabbed Aseprite at Twitter’s suggestion (which turns out to default to DB32), and I did some pixel art, kind of on a whim! Twitter pretty well destroyed them, so I posted them in a couple batches on Tumblr: batch 1, batch 2, and a little talking eevee portrait.

    I don’t know if the game will end up with an illustration for every object and room — that would be a lot of pixels and these take me a while — but it would be pretty neat to illustrate the major landmarks and most interesting items.

  • spline/flora: I wrote a quick and dirty chronological archive for Flora. I then did a bunch of witchcraft to replace 250-odd old comic pages with the touched-up versions that appear in the printed book.

  • blog: I wrote about elegance. I also started on another post, which is not yet finished. It’s also terrible, sorry.

Intel releases the Arduino 101 firmware source code

Post Syndicated from ris original http://lwn.net/Articles/685026/rss

Arduino has announced the release of the source code for the real-time operating system
(RTOS) powering the Arduino 101 and Genuino 101. “The package
contains the complete BSP (Board Support Package) for the Curie processor
on the 101. It allows you to compile and modify the core OS and the
firmware to manage updates and the bootloader. (Be careful with this one
since flashing the wrong bootloader could brick your board and require a
JTAG programmer to unbrick it).” (Thanks to Paul Wise)

Sharpen your Skill Set with Apache Spark on the AWS Big Data Blog

Post Syndicated from Andy Werth original https://blogs.aws.amazon.com/bigdata/post/Tx3GKU2NKIF301Y/Sharpen-your-Skill-Set-with-Apache-Spark-on-the-AWS-Big-Data-Blog

The AWS Big Data Blog has a large community of authors who are passionate about Apache Spark and who regularly publish content that helps customers use Spark to build real-world solutions. You’ll see content on a variety of topics, including deep-dives on Spark’s internals, building Spark Streaming applications, creating machine learning pipelines using MLlib, and ways to apply Spark to various real-world use cases. You can learn hands-on by creating distributed applications using code samples from the blog directly against data in Amazon S3, and you can run Spark on Amazon EMR to enable fast experimentation and quick production deployments.

The latest releases of Spark are supported within a few weeks of Apache general availability (Spark 1.6.1 was included in EMR 4.5 last week). Spark on EMR is configured by default to use dynamic allocation of executors to efficiently utilize available resources, it can utilize EMRFS to efficiently query data in Amazon S3, and it can be used with interactive notebooks when you’re also installing Apache Zeppelin on your cluster.
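To get a feel for the hands-on side, here is a minimal PySpark sketch of the kind of experiment you might run on an EMR cluster against data in Amazon S3; the bucket and key are placeholders, and the word count is just a stand-in workload.

```python
# Minimal PySpark sketch: count words in a text file stored in Amazon S3, the
# sort of small experiment you might run on an EMR cluster. The bucket and key
# below are placeholders.
from pyspark import SparkContext

sc = SparkContext(appName="s3-word-count")

lines = sc.textFile("s3://my-example-bucket/sample/data.txt")  # EMRFS resolves s3:// paths on EMR
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

for word, n in counts.take(10):
    print(word, n)

sc.stop()
```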

Below are recent posts that focus on Spark:

We hope these posts help you learn more about the Spark ecosystem and demonstrate ways to leverage these technologies on AWS to help you derive value from your data. And with new posts coming out every week, stay tuned for new Spark use cases and examples!

Please let us know in the comments below if you’d like us to cover specific Spark-related topics. If you have questions about Spark on EMR, please email us at [email protected] and we’ll get back to you right away.

AWS Week in Review – April 18, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-april-18-2016/

Let’s take a quick look at what happened in AWS-land last week:

Monday

April 18

Tuesday

April 19

Wednesday

April 20

Thursday

April 21

Friday

April 22

Saturday

April 23

Sunday

April 24

New & Notable Open Source

New SlideShare Presentations

New Customer Success Stories

New YouTube Videos

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Jeff;

Google Rapid Response (GRR) – Remote Live Forensics For Incident Response

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/pqSSf3VI9-A/

GRR Rapid Response is an incident response framework focused on remote live forensics. It is based on a client-server architecture: an agent is installed on target systems, and a Python server infrastructure manages and communicates with the agents. There are agents for Windows, Linux and Mac OS X environments. Overview To…

Read the full post at darknet.org.uk

Surviving the Zombie Apocalypse with Serverless Microservices

Post Syndicated from Aaron Kao original https://aws.amazon.com/blogs/compute/surviving-the-zombie-apocalypse-with-serverless-microservices/

Run Apps without the Bite!

by: Kyle Somers – Associate Solutions Architect

Let’s face it, managing servers is a pain! Capacity management and scaling is even worse. Now imagine dedicating your time to SysOps during a zombie apocalypse — barricading the door from flesh eaters with one arm while patching an OS with the other.

This sounds like something straight out of a nightmare. Lucky for you, this doesn’t have to be the case. Over at AWS, we’re making it easier than ever to build and power apps at scale with powerful managed services, so you can focus on your core business – like surviving – while we handle the infrastructure management that helps you do so.

Join the AWS Lambda Signal Corps!

At AWS re:Invent in 2015, we piloted a workshop where participants worked in groups to build a serverless chat application for zombie apocalypse survivors, using Amazon S3, Amazon DynamoDB, Amazon API Gateway, and AWS Lambda. Participants learned about microservices design patterns and best practices. They then extended the functionality of the serverless chat application with various add-on functionalities – such as mobile SMS integration, and zombie motion detection – using additional services like Amazon SNS and Amazon Elasticsearch Service.

Given the widespread interest in serverless architectures and AWS Lambda among our customers, we’ve recognized the excitement around this subject. Therefore, we are happy to announce that we’ll be taking this event on the road in the U.S. and abroad to recruit new developers for the AWS Lambda Signal Corps!

 

Help us save humanity! Learn More and Register Here!

 

Washington, DC | March 10 – Mission Accomplished!

San Francisco, CA @ AWS Loft | March 24 – Mission Accomplished!

New York City, NY @ AWS Loft | April 13 – Mission Accomplished!

London, England @ AWS Loft | April 25

Austin, TX | April 26

Atlanta, GA | May 4

Santa Monica, CA | June 7

Berlin, Germany | July 19

San Francisco, CA @ AWS Loft | August 16

New York City, NY @ AWS Loft | August 18

 

If you’re unable to join us at one of these workshops, that’s OK! In this post, I’ll show you how our survivor chat application incorporates some important microservices design patterns and how you can power your apps in the same way using a serverless architecture.


 

What Are Serverless Architectures?

At AWS, we know that infrastructure management can be challenging. We also understand that customers prefer to focus on delivering value to their business and customers. There’s a lot of undifferentiated heavy lifting involved in building and running applications, such as installing software, managing servers, coordinating patch schedules, and scaling to meet demand. Serverless architectures allow you to build and run applications and services without having to manage infrastructure. Your application still runs on servers, but all the server management is done for you by AWS. Serverless architectures can make it easier to build, manage, and scale applications in the cloud by eliminating much of the heavy lifting involved with server management.

Key Benefits of Serverless Architectures

  • No Servers to Manage: There are no servers for you to provision and manage. All the server management is done for you by AWS.
  • Increased Productivity: You can now fully focus your attention on building new features and apps because you are freed from the complexities of server management, allowing you to iterate faster and reduce your development time.
  • Continuous Scaling: Your applications and services automatically scale up and down based on the size of the workload.

What Should I Expect to Learn at a Zombie Microservices Workshop?

The workshop content we developed is designed to demonstrate best practices for serverless architectures using AWS. In this post we’ll discuss the following topics:

  • Which services are useful when designing a serverless application on AWS (see below!)
  • Design considerations for messaging, data transformation, and business or app-tier logic when building serverless microservices.
  • Best practices demonstrated in the design of our zombie survivor chat application.
  • Next steps for you to get started building your own serverless microservices!

Several AWS services were used to design our zombie survivor chat application. Each of these services is managed and highly scalable. Let’s take a quick look at the ones we incorporated in the architecture (a minimal Lambda handler sketch follows the list):

  • AWS Lambda allows you to run your code without provisioning or managing servers. Just upload your code (currently Node.js, Python, or Java) and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app. Lambda is used to power many use cases, such as application back ends, scheduled administrative tasks, and even big data workloads via integration with other AWS services such as Amazon S3, DynamoDB, Redshift, and Kinesis.
  • Amazon Simple Storage Service (Amazon S3) is our object storage service, which provides developers and IT teams with secure, durable, and scalable storage in the cloud. S3 is used to support a wide variety of use cases and is easy to use with a simple interface for storing and retrieving any amount of data. In the case of our survivor chat application, it can even be used to host static websites with CORS and DNS support.
  • Amazon API Gateway makes it easy to build RESTful APIs for your applications. API Gateway is scalable and simple to set up, allowing you to build integrations with back-end applications, including code running on AWS Lambda, while the service handles the scaling of your API requests.
  • Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.

Overview of the Zombie Survivor Chat App

The survivor chat application represents a completely serverless architecture that delivers a baseline chat application (written using AngularJS) to workshop participants upon which additional functionality can be added. In order to deliver this baseline chat application, an AWS CloudFormation template is provided to participants, which spins up the environment in their account. The following diagram represents a high level architecture of the components that are launched automatically:

High-Level Architecture of Survivor Serverless Chat App

  • Amazon S3 bucket is created to store the static web app contents of the chat application.
  • AWS Lambda functions are created to serve as the back-end business logic tier for processing reads/writes of chat messages.
  • API endpoints are created using API Gateway and mapped to Lambda functions. The API Gateway POST method points to a WriteMessages Lambda function. The GET method points to a GetMessages Lambda function.
  • A DynamoDB messages table is provisioned to act as our data store for the messages from the chat application.

Serverless Survivor Chat App Hosted on Amazon S3

With the CloudFormation stack launched and the components built out, the end result is a fully functioning chat app hosted in S3, using API Gateway and Lambda to process requests, and DynamoDB as the persistence for our chat messages.
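To make the read path concrete, here is a hypothetical sketch of a GetMessages-style function using boto3. The table and attribute names are assumptions; the workshop's CloudFormation template defines the real ones.

```python
# Hypothetical sketch of a GetMessages-style Lambda function using boto3.
# Table and attribute names are assumptions, not the workshop's actual schema.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("messages")  # assumed table name


def handler(event, context):
    # Return a recent batch of chat messages for the web client.
    result = table.scan(Limit=50)
    items = result.get("Items", [])
    items.sort(key=lambda item: item.get("timestamp", 0))
    return {"messages": items}
```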

With this baseline app, participants join in teams to build out additional functionality, including the following:

  • Integration of SMS/MMS via Twilio. Send messages to chat from SMS.
  • Motion sensor detection of nearby zombies with Amazon SNS and Intel® Edison and Grove IoT Starter Kit. AWS provides a shared motion sensor for the workshop, and you consume its messages from SNS.
  • Help-me panic button with IoT.
  • Integration with Slack for messaging from another platform.
  • Typing indicator to see which survivors are typing.
  • Serverless analytics of chat messages using Amazon Elasticsearch Service (Amazon ES).
  • Any other functionality participants can think of!

As a part of the workshop, AWS provides guidance for most of these tasks. With these add-ons completed, the architecture of the chat system begins to look quite a bit more sophisticated, as shown below:

Architecture of Survivor Chat with Additional Add-on Functionality

Architectural Tenets of the Serverless Survivor Chat

For the most part, the design patterns you’d see in a traditional server-yes environment you will also find in a serverless environment. No surprises there. With that said, it never hurts to revisit best practices while learning new ones. So let’s review some key patterns we incorporated in our serverless application.

Decoupling Is Paramount

In the survivor chat application, Lambda functions are serving as our tier for business logic. Since users interact with Lambda at the function level, it serves you well to split up logic into separate functions as much as possible so you can scale the logic tier independently from the source and destinations upon which it serves.

As you’ll see in the architecture diagram in the above section, the application has separate Lambda functions for the chat service, the search service, the indicator service, etc. Decoupling is also incorporated through the use of API Gateway, which exposes our back-end logic via a unified RESTful interface. This model allows us to design our back-end logic with potentially different programming languages, systems, or communications channels, while keeping the requesting endpoints unaware of the implementation. Use this pattern and you won’t cry for help when you need to scale, update, add, or remove pieces of your environment.
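One way to see the decoupling is from the client side: the web app (or any other client) only ever calls the REST interface, so the Lambda functions behind it can change freely. The sketch below uses a placeholder API Gateway stage URL and hypothetical /messages routes.

```python
# Illustration of the decoupling point: clients only talk to the REST interface
# exposed by API Gateway. The URL and payload fields here are hypothetical.
import requests

API_BASE = "https://abc123.execute-api.us-east-1.amazonaws.com/prod"  # placeholder stage URL


def post_message(name, text):
    """Send a chat message; the back end behind /messages can change freely."""
    resp = requests.post(API_BASE + "/messages", json={"name": name, "text": text}, timeout=10)
    resp.raise_for_status()
    return resp.json()


def get_messages():
    resp = requests.get(API_BASE + "/messages", timeout=10)
    resp.raise_for_status()
    return resp.json().get("messages", [])
```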

Separate Your Data Stores

Treat each data store as an isolated application component of the service it supports. One common pitfall when following microservices architectures is to forget about the data layer. By keeping the data stores specific to the service they support, you can better manage the resources needed at the data layer specifically for that service. This is the true value in microservices.

In the survivor chat application, this practice is illustrated with the Activity and Messages DynamoDB tables. The activity indicator service has its own data store (Activity table) while the chat service has its own (Messages). These tables can scale independently along with their respective services. This scenario also represents a good example of statefulness. The implementation of the typing indicator add-on uses DynamoDB via the Activity table to track state information about which users are typing. Remember, many of the benefits of microservices are lost if the components are still all glued together at the data layer in the end, creating a messy common denominator for scaling.
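A minimal sketch of that separation, assuming an "activity" table and simple attribute names (both assumptions, not the workshop's actual schema): the indicator service touches only its own table and never the Messages table.

```python
# Sketch of the "separate data stores" idea: the typing-indicator service
# writes only to its own Activity table. Table and attribute names are assumed.
import time

import boto3

activity_table = boto3.resource("dynamodb").Table("activity")  # assumed table name


def handler(event, context):
    # Record that a user is currently typing in a channel.
    activity_table.put_item(Item={
        "channel": event.get("channel", "default"),
        "name": event.get("name", "anonymous-survivor"),
        "last_typed": int(time.time()),
    })
    return {"status": "ok"}
```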

Leverage Data Transformations up the Stack

When designing a service, data transformation and compatibility are big components. How will you handle inputs from many different clients, users, systems for your service? Will you run different flavors of your environment to correspond with different incoming request standards?  Absolutely not!

With API Gateway, data transformation becomes significantly easier through built-in models and mapping templates. With these features you can build data transformation and mapping logic into the API layer for requests and responses. This results in less work for you since API Gateway is a managed service. In the case of our survivor chat app, AWS Lambda and our survivor chat app require JSON while Twilio likes XML for the SMS integration. This type of transformation can be offloaded to API Gateway, leaving you with a cleaner business tier and one less thing to design around!

Use API Gateway as your interface and Lambda as your common backend implementation. API Gateway uses Apache Velocity Template Language (VTL) and JSONPath for transformation logic. Of course, there is a trade-off to be considered, as a lot of transformation logic could be handled in your business-logic tier (Lambda). But, why manage that yourself in application code when you can transparently handle it in a fully managed service through API Gateway? Here are a few things to keep in mind when handling transformations using API Gateway and Lambda:

  • Transform first; then call your common back-end logic.
  • Use API Gateway VTL transformations first when possible.
  • Use Lambda to preprocess data in ways that VTL can’t.
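When a mapping template isn't enough, the normalization can live in a small Lambda in front of the shared logic. The sketch below converts a simplified XML payload into the JSON shape a chat back end might expect; the XML format here is a stand-in for illustration, not Twilio's actual webhook format.

```python
# Sketch of "use Lambda to preprocess data in ways VTL can't": normalize an XML
# payload into the JSON our common back-end logic expects, then hand it off.
# The XML shape is a simplified stand-in, not Twilio's real format.
import xml.etree.ElementTree as ET


def normalize_sms(xml_body):
    """Turn an XML SMS payload into the chat service's JSON message shape."""
    root = ET.fromstring(xml_body)
    return {
        "name": root.findtext("From", default="unknown"),
        "text": root.findtext("Body", default=""),
        "source": "sms",
    }


def handler(event, context):
    message = normalize_sms(event.get("xml", "<Sms><From>+1555</From><Body>hi</Body></Sms>"))
    # ...call the same write-message logic the web client uses...
    return message
```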

Using API Gateway VTL for Input/Output Data Transformations

 

Security Through Service Isolation and Least Privilege

As a general recommendation when designing your services, always utilize least privilege and isolate components of your application to provide control over access. In the survivor chat application, a permissions-based model is used via AWS Identity and Access Management (IAM). IAM is integrated in every service on the AWS platform and provides the capability for services and applications to assume roles with strict permission sets to perform their least-privileged access needs. Along with access controls, you should implement audit and access logging to provide the best visibility into your microservices. This is made easy with Amazon CloudWatch Logs and AWS CloudTrail. CloudTrail enables audit capability of API calls made on the platform while CloudWatch Logs enables you to ship custom log data to AWS. Although our implementation of Amazon Elasticsearch in the survivor chat is used for analyzing chat messages, you can easily ship your log data to it and perform analytics on your application. You can incorporate security best practices in the following ways with the survivor chat application:

  • Each Lambda function should have an IAM role to access only the resources it needs. For example, the GetMessages function can read from the Messages table while the WriteMessages function can write to it. But they cannot access the Activities table that is used to track who is typing for the indicator service.
  • Each API Gateway endpoint must have IAM permissions to execute the Lambda function(s) it is tied to. This model ensures that Lambda is only executed from the principal that is allowed to execute it, in this case the API Gateway method that triggers the back-end function.
  • DynamoDB requires read/write permissions via IAM, which limits anonymous database activity.
  • Use AWS CloudTrail to audit API activity on the platform and among the various services. This provides traceability, especially to see who is invoking your Lambda functions.
  • Design Lambda functions to publish meaningful outputs, as these are logged to CloudWatch Logs on your behalf.

FYI, in our application, we allow anonymous access to the chat API Gateway endpoints. We want to encourage all survivors to plug into the service without prior registration and start communicating. We’ve assumed zombies aren’t intelligent enough to hack into our communication channels. Until the apocalypse, though, stay true to API keys and authorization with signatures, which API Gateway supports!

Don’t Abandon Dev/Test

When developing with microservices, you can still leverage separate development and test environments as a part of the deployment lifecycle. AWS provides several features to help you continue building apps along the same trajectory as before, including these:

  • Lambda function versioning and aliases: Use these features to version your functions based on the stages of deployment such as development, testing, staging, pre-production, etc. Or perhaps make changes to an existing Lambda function in production without downtime.
  • Lambda service blueprints: Lambda comes with dozens of blueprints to get you started with prewritten code that you can use as a skeleton, or a fully functioning solution, to complete your serverless back end. These include blueprints with hooks into Slack, S3, DynamoDB, and more.
  • API Gateway deployment stages: Similar to Lambda versioning, this feature lets you configure separate API stages, along with unique stage variables and deployment versions within each stage. This allows you to test your API with the same or different back ends while it progresses through changes that you make at the API layer.
  • Mock Integrations with API Gateway: Configure dummy responses that developers can use to test their code while the true implementation of your API is being developed. Mock integrations make it faster to iterate through the API portion of a development lifecycle by streamlining pieces that used to be very sequential/waterfall.
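A quick sketch of the versioning-and-alias flow using boto3; the function and alias names are placeholders.

```python
# Sketch of Lambda versioning and aliases with boto3: publish the current code
# as an immutable version and point a "staging" alias at it. Names are placeholders.
import boto3

lam = boto3.client("lambda")

# Freeze the current $LATEST code as a numbered, immutable version.
version = lam.publish_version(FunctionName="write-messages")["Version"]

# Point a stage alias at that version.
lam.create_alias(
    FunctionName="write-messages",
    Name="staging",             # e.g. dev / staging / prod
    FunctionVersion=version,
)
# Promoting later is just repointing an alias:
# lam.update_alias(FunctionName="write-messages", Name="prod", FunctionVersion=version)
```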

Using Mock Integrations with API Gateway

Stay Tuned for Updates!

Now that you’ve got the necessary best practices to design your microservices, do you have what it takes to fight against the zombie horde? The serverless options we explored are ready for you to get started with, and the survivors are counting on you!

Be sure to keep an eye on the AWS GitHub repo. Although I didn’t cover each component of the survivor chat app in this post, we’ll be deploying this workshop and code soon for you to launch on your own! Keep an eye out for Zombie Workshops coming to your city, or nominate your city for a workshop here.

For more information on how you can get started with serverless architectures on AWS, refer to the following resources:

Whitepaper – AWS Serverless Multi-Tier Architectures

Reference Architectures and Sample Code

*Special thanks to my colleagues Ben Snively, Curtis Bray, Dean Bryen, Warren Santner, and Aaron Kao at AWS. They were instrumental to our team developing the content referenced in this post.

AWS CodeDeploy Deployments with HashiCorp Consul

Post Syndicated from George Huang original https://aws.amazon.com/blogs/devops/aws-codedeploy-deployments-with-hashicorp-consul/

Learn how to use AWS CodeDeploy and HashiCorp Consul together for your application deployments. 

AWS CodeDeploy automates code deployments to Amazon Elastic Compute Cloud (Amazon EC2) and on-premises servers. HashiCorp Consul is an open-source tool providing service discovery and orchestration for modern applications. 

Learn how to get started by visiting the guest post on the AWS Partner Network Blog. You can see a full list of CodeDeploy product integrations by visiting here


Security advisories for Monday

Post Syndicated from ris original http://lwn.net/Articles/684999/rss

Arch Linux has updated pgpdump
(denial of service), samba (multiple
vulnerabilities), squid (multiple
vulnerabilities), and thunderbird (two vulnerabilities).

Debian has updated imlib2 (multiple vulnerabilities) and libgd2 (code execution).

Fedora has updated java-1.8.0-openjdk (F23: multiple
vulnerabilities), openssh (F23: privilege
escalation), parallel (F23; F22: file overwrites),
python-tgcaptcha2 (F23; F22: reusable captchas), thunderbird (F23: multiple vulnerabilities),
w3m (F23: denial of service), and webkitgtk4 (F23: multiple vulnerabilities).

Mageia has updated java-1.8.0-openjdk (multiple vulnerabilities), libcryptopp (information disclosure), squid (denial of service), varnish (access control bypass), and vtun (denial of service).

openSUSE has updated Chromium (13.2; 13.1:
multiple vulnerabilities) and clamav
(Leap42.1: database refresh).

Red Hat has updated nss, nspr
(RHEL5: two vulnerabilities) and nss, nspr,
nss-softokn, nss-util
(RHEL7: two vulnerabilities).

Scientific Linux has updated nss,
nspr
(SL5: two vulnerabilities).

SUSE has updated yast2-users
(SLE12-SP1: empty passwords fields in /etc/shadow).

Ubuntu has updated mysql-5.7
(16.04: multiple vulnerabilities).

Storage Pod 6.0: Building a 60 Drive 480TB Storage Server

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/open-source-data-storage-server/

Storage Pod 6.0
Storage Pod 6.0 deploys 60 off-the-shelf hard drives in a 4U chassis to lower the cost of our latest data storage server to just $0.036/GB. That’s 22 percent less than our Storage Pod 5.0 storage server that used 45 drives to store data for $0.044/GB. The Storage Pod 6.0 hardware design is, as always, open source so we’ve included the blueprints, STEP files, wiring diagrams, build instructions and a parts list so you can build your very own Storage Pod. Your cost may be a bit more, but it is possible for you to build a 4U server with 480TB of data storage for less than a nickel ($0.05) a gigabyte – read on.

A little Storage Pod history

In 2009, Storage Pod 1.0 changed the landscape in data storage servers by delivering 67.5TB of storage in a 4U box for just $0.11/GB — that was up to 10 times lower than comparable systems on the market at the time. We also open-sourced the hardware design of Storage Pod 1.0 and companies, universities, and even weekend hobbyist started building their own Storage Pods.

Over the years we introduced updates to the Storage Pod design, driving down the cost while improving the reliability and durability with each iteration. Storage Pod 5.0 marked our initial use of the Agile manufacturing and design methodology which helped identify and squeeze out more costs, driving our cost per GB of storage below $0.05. Agile also enabled us to manage a rapid design prototyping process that allowed us to stretch the Storage Pod chassis to include 60 drives then produce 2-D and 3-D specifications, a build book, a bill of materials and update our manufacturing and assembly processes for the new design – Storage Pod 6.0. All of this in about 6 months.

What’s new in Storage Pod 6.0

60 drive storage server

What’s new is 60 drives in a 4U chassis. That’s a 33 percent increase to the storage density in the same rack space. Using 4TB drives in a 60-drive Storage Pod increases the amount of storage in a standard 40U rack from 1.8 to 2.4 Petabytes. Of course, by using 8TB drives you’d get a 480TB data storage server in 4U and 4.8 Petabytes in a standard rack.

When looking at what’s new in Storage Pod 6.0 it would be easy to say it has 60 drives and stop there. After all, the Motherboard, CPU, memory, SATA cards, and backplanes we use didn’t change from 5.0. But expanding to 60 drives created all kinds of things to consider, for example:

  • How long do you make the chassis before it is too long for the rack?
  • Will we need more cooling?
  • Will the power supplies need to be upgraded?
  • Will the SATA cables be too long? The maximum spec’d length is 1 meter.
  • Can the SATA cards keep up with the 15 more drives? Or will we need to upgrade them?
  • Will the CPU and the motherboard be able to handle the additional data load of 15 more drives?
  • Will more or faster memory be required?
  • Will the overall Storage Pod be correctly balanced between CPU, memory, storage and other components so that nothing is over/under-spec’ed?
  • What hard drives will work with this configuration? Would we have to use enterprise drives? Just kidding!

Rapidly iterating to the right design

As part of the prototyping effort we built multiple configurations and Backblaze Labs put each configuration through its paces. To do this we assembled a Backblaze Vault with 20 prototype Storage Pods in three different configurations. Since each Storage Pod in a Backblaze Vault is expected to perform similarly, we monitored and detected those Storage Pods that were lagging as well as those that were “bored”. By doing this we were able to determine that most of the components in Storage Pod 6.0 did not need to be upgraded to achieve optimal performance in Backblaze Vaults utilizing 60-drive Storage Pods.

We did make some changes to Storage Pod 6.0 however:

  • Increased the chassis by 5 ½” from 28 1/16” to 33 9/16” in length. Server racks are typically 29” in depth, more on that later.
  • Increased the length of the backplane tray to support 12 backplanes.
  • Added 1 additional drive bracket to handle another row of 15 drives.
  • Added 3 more backplanes and 1 more SATA card.
  • Added 3 more SATA cables.
  • Changed the routing of the SATA-3 cables to stay within the 1-meter length spec.
  • Updated the pigtail cable design so we could power the three additional backplanes.
  • Changed the routing of the power cables on the backplane tray.
  • Changed the on/off switch retiring the ele-302 and replacing it with the Chill-22.
  • Increased the length of the lid over the drive bay to 22 7/8”.

That last item, increasing the length of the drive bay lid, led to a redesign of both lids. Why?

Lids and Tabs

The lid from Storage Pod 5.0 (on the left above) proved to be difficult to remove when it was stretched another 4+ inches. The tabs didn’t provide enough leverage to easily open the longer drive lid. As a consequence Storage Pod 6.0 has a new design (shown on the right above) which provides much better leverage. The design in the middle was one of the prototype designs we tried, but in the end the “flame” kept catching the fingers of the ops folks when they opened or closed the lid.

Too long for the server rack?

The 6.0 chassis is 33 9/16” in length and 35 1/16” with the lids on. A rack is typically 29” in depth, leaving 4+ inches of Storage Pod chassis “hanging out.” We decided to keep the front (Backblaze logo side) aligned to the front of the rack and let the excess hang off the back in the warm aisle of the datacenter. A majority of a pod’s weight is in the front (60 drives!) so the rails support this weight. The overhang is on the back side of the rack, but there’s plenty of room between the rows of racks, so there’s no issue with space. We’re pointing out the overhang so if you end up building your own Storage Pod 6.0 server, you’ll leave enough space behind, or in front, of your rack for the overhang.

The cost in dollars

There are actually three different prices for a Storage Pod. Below are the costs of each of these scenarios to build a 240TB Storage Pod 6.0 storage server with 4TB hard drives:

How Built | Total Cost | Description
Backblaze | $8,733.73 | The cost for Backblaze given that we purchase 500+ Storage Pods and 20,000+ hard drives per year. This includes materials, assembly, and testing.
You Build It | $10,398.57 | The cost for you to build one Storage Pod 6.0 server by buying the parts and assembling it yourself.
You Buy It | $12,849.40 | The cost for you to purchase one already assembled Storage Pod 6.0 server from a third-party supplier and then purchase and install 4TB hard drives yourself.
These prices do not include packaging, shipping, taxes, VAT, etc.

Since we increased the number of drives from 45 to 60, comparing the total cost of Storage Pod 6.0 to the previous 45-drive versions isn’t appropriate. Instead we can compare them using the “Cost per GB” of storage.

The Cost per GB of storage

Using the Backblaze cost for comparison, below is the Cost per GB of building the different Storage Pod versions.

Storage Pod version

As you can see in the table, the cost in actual dollars increased by $760 with Storage Pod 6.0, but the Cost per GB decreased nearly a penny ($0.008) given the increased number of drives and some chassis design optimizations.

Saving $0.008 per GB may not seem very innovative, but think about what happens when that trivial amount is multiplied across the hundreds of Petabytes of data our B2 Cloud Storage service will store over the coming months and years. A little innovation goes a long way.

Building your own Storage Pod 6.0 server

You can build your own Storage Pod. Here’s what you need to get started:

Chassis – We’ve provided all the drawings you should need to build (or to have built) your own chassis. We’ve had multiple metal bending shops use these files to make a Storage Pod chassis. You get to pick the color.

Parts – In Appendix A we’ve listed all the parts you’ll need for a Storage Pod. Most of the parts can be purchased online via Amazon, Newegg, etc. As noted on the parts list, some parts are purchased either through a distributor or from the contract assemblers.

Wiring – You can purchase the power wiring harness and pigtails as noted on the parts list, but you can also build your own. Whether you build or buy, you’ll want to download the instructions on how to route the cables in the backplane tray.

Build Book – Once you’ve gathered all the parts, you’ll need the Build Book for step-by-step assembly instructions.

As a reminder, Backblaze does not sell Storage Pods, and the design is open source, so we don’t provide support or warranty for people who choose to build their own Storage Pod. That said, if you do build your own, we’d like to hear from you.

Building a 480TB Storage Pod for less than a $0.05 per GB

We’ve used 4TB drives in this post for consistency, but we have in fact built Storage Pods with 5-, 6- and even 8-TB drives. If you are building a Storage Pod 6.0 storage server, you can certainly use higher capacity drives. To make it easy, the chart below is your estimated cost if you were to build your own Storage Pod using the drives noted. We used the lowest “Street Price” from Amazon or Newegg for the price of the 60 hard drives. The list is sorted by the Cost per GB (lowest to highest). The (*) indicates we use this drive model in our datacenter.
Storage Pod Cost per GB
As you can see there are multiple drive models and capacities you can use to achieve a Cost per GB of $0.05 or less. Of course we aren’t counting your sweat-equity in building a Storage Pod, nor do we include the software you are planning to run. If you are looking for capacity, think about using the Seagate 8TB drives to get nearly a half a petabyte of storage in a 4U footprint (albeit with a 4” overhang) for just $0.047 a GB. Total cost: $22,600.

What about SMR drives?

Depending on your particular needs, you might consider using SMR hard drives. An SMR drive stores data more densely on each disk platter surface by “overlapping” tracks of data. This lowers the cost to store data. The downside is that when data is deleted, the newly freed space can be extremely slow to reuse. As such SMR drives are generally used for archiving duties where data is written sequentially to a drive with few, and preferably no, deletions. If this type of capability fits your application, you will find SMR hard drives to be very inexpensive. For example, a Seagate 8TB Archive drive (model: ST8000AS0002) is $214.99, making the total cost for a 480TB Storage Pod 6.0 storage server only $16,364.07 or a very impressive $0.034 per GB. By the way, if you’re looking for off-site data archive storage, Backblaze B2 will store your data for just $0.005/GB/month.
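For reference, the cost-per-GB figures in this section come down to one division: total build cost over raw capacity (using 1 TB = 1,000 GB). A quick Python check of the two 8TB configurations quoted above:

```python
# Cost per GB = total build cost / raw capacity in GB (1 TB = 1,000 GB).
def cost_per_gb(total_cost, drive_count, drive_tb):
    return total_cost / (drive_count * drive_tb * 1000)

# 60 x 8TB Seagate Archive (SMR) drives, total build cost $16,364.07 per the post:
print(round(cost_per_gb(16364.07, 60, 8), 3))  # 0.034 -> $0.034/GB
# 60 x 8TB drives in the ~$22,600 build quoted earlier:
print(round(cost_per_gb(22600.00, 60, 8), 3))  # 0.047 -> $0.047/GB
```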

Buying a Storage Pod

Backblaze does not sell Storage Pods or parts. If you are interested in buying a Storage Pod 6.0 storage server (without drives), you can check out the folks at Backuppods. They have partnered with Evolve Manufacturing to deliver Backblaze-inspired Storage Pods. Evolve Manufacturing is the contract manufacturer used by Backblaze to manufacture and assemble Storage Pod versions 4.5, 5.0 and now 6.0. Backuppods.com offers a fully assembled and tested Storage Pod 6.0 server (less drives) for $5,950.00 plus shipping, handling and tax. They also sell older Storage Pod versions. Please check out their website for the models and configurations they are currently offering.

Appendix A: Storage Pod 6.0 Parts List

Below is the list of parts you’ll need to build your own Storage Pod 6.0. The prices listed are “street” prices. You should be able to find these items online or from the manufacturer in quantities sufficient to build one Storage Pod. Good luck and happy building.

Item | Qty | Price | Total | Notes
4U Custom Chassis (includes case, supports, trays, etc.) | 1 | $995.00 | $995.00 | 1
Power Supply, EVGA Supernova NEX750G | 2 | $119.90 | $239.98 |
On/Off Switch & Cable, Primochill 120-G1-0750-XR (Chill-22) | 1 | $14.95 | $14.95 |
Case Fan, FAN AXIAL 120X25MM VAPO 12VDC | 3 | $10.60 | $31.80 |
Dampener Kits, Power Supply Vibration Dampener | 2 | $4.45 | $8.90 |
Soft Fan Mount, AFM03B (2 tab ends) | 12 | $0.42 | $4.99 |
Motherboard, Supermicro MBD-X9SRH-7TF-O (MicroATX) | 1 | $539.50 | $539.50 |
CPU Fan, DYNATRON R13 1U Server CPU FAN | 1 | $45.71 | $45.71 |
CPU, Intel XEON E5-1620 V2 (Quad Core) | 1 | $343.94 | $343.94 |
8GB RAM, PC3-12800 DDR3-1600MHz 240-Pin | 4 | $89.49 | $357.96 |
Port Multiplier Backplanes, 5 Port Backplane (Marvell 9715 chipset) | 12 | $45.68 | $548.10 | 2, 1
SATA III Card, 4-port PCIe Express (Marvell 9235 chipset) | 3 | $57.10 | $171.30 | 2, 1
SATA III Cable, SATA cables RA-to-STR 1M locking | 12 | $3.33 | $39.90 | 3, 1
Cable Harness - PSU1 (24-pin, Backblaze to Pigtail) | 1 | $33.00 | $33.00 | 1
Cable Harness - PSU2 (20-pin, Backblaze to Pigtail) | 1 | $31.84 | $31.84 | 1
Cable Pigtail (24-pin, EVGA NEX750G Connector) | 2 | $16.43 | $16.43 | 1
Screw: 6-32 X 1/4 Phillips PAN ZPS | 12 | $0.015 | $1.83 | 4
Screw: 4-40 X 5/16 Phillips PAN ZPS ROHS | 60 | $0.015 | $1.20 | 4
Screw: 6-32 X 1/4 Phillips 100D Flat ZPS | 39 | $0.20 | $7.76 | 4
Screw: M3 X 5MM Long Phillips, HD | 4 | $0.95 | $3.81 |
Standoff: M3 X 5MM Long Hex, SS | 4 | $0.69 | $2.74 |
Foam strip for fan plate, 1/2″ x 17″ x 3/4″ | 1 | $0.55 | $0.55 |
Cable Tie, 8.3″ x 0.225″ | 4 | $0.25 | $1.00 |
Cable Tie, 4″ length | 2 | $0.03 | $0.06 |
Plastic Drive Guides | 120 | $0.25 | $30.00 | 1
Label, Serial-Model, Transducer, Blnk | 30 | $0.20 | $6.00 |
Total | | | $3,494.67 |

NOTES:

  • Note 1: May be able to be purchased from backuppods.com; price may vary.
  • Note 2: Sunrich and CFI make the recommended backplanes; Sunrich and Syba make the recommended SATA cards.
  • Note 3: Nippon Labs makes the recommended SATA cables, but others may work.
  • Note 4: Sold in packages of 100; the 100-package price was used for the extended cost.

 

The post Storage Pod 6.0: Building a 60 Drive 480TB Storage Server appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

Why I don't like Goranov's idea of turning individual pension accounts into a common pool

Post Syndicated from Delian Delchev original http://feedproxy.google.com/~r/delian/~3/XQEkQbMijd4/blog-post_25.html

Why I don't like Goranov's proposed changes to the second pension pillar (http://www.dnevnik.bg/biznes/finansi/2016/04/25/2749044_goranov_za_vtorata_pensiia_moje_da_se_naloji_decata_da/).
The state pension (the first pillar) is built on a solidarity, pay-as-you-go model. Today's workers pay contributions into a common pool, out of which the pensions of today's pensioners are paid. Pensioners often reason, wrongly, that they receive their pension from the money they paid in while they were working. No: back then they were paying the pensions of the pensioners of their day, and now they receive their pension from today's workers. When revenues fall short, the state tops up the difference (that is, money from the other taxes paid by today's workers is redistributed to supplement pensions). Naturally, this happens within the limits of what the state can afford, which is proportional to the number of people working (when the number of workers shrinks, there is no way this model can guarantee high pensions; they shrink too). This is the classic solidarity model. It does not guarantee a high pension. It does not guarantee that your pension will be proportional to what you contributed while working. It only guarantees a redistribution of a small share of the current pool of incoming money, according to your merit as the state assesses it at that moment. Technically, your children (and other people's children) pay your pension.
And since the state is not a good steward of money (it likes spending more than earning), a second pension pillar was introduced here on the Dutch model. Part of each citizen's contribution goes to private pension funds, which invest the money under the supervision of the Financial Supervision Commission (KFN) and are, in principle, supposed to deliver a higher return. In good times the pension from these funds is expected to grow faster than the state one, but it also carries the risk of being lost entirely (if the fund goes bankrupt through mismanagement and a lack of oversight; see KFN).
The second pension is not meant to be paid for life. If the money in your account runs out, you simply stop receiving it. Nor is that pension guaranteed (see: fund bankruptcy), nor is there any guaranteed amount higher than the contributions you made. In other words, the second pillar is a risky investment. That is why citizens have to choose their funds wisely and understand that a bad choice means taking on more risk.
The benefits, however, are these. With a good choice, the pension will probably grow more than the state one (unless state populism redistributes taxes faster than the country's economy grows). The pension is individual: you get out what you put in. If you contributed little, you get little; if you contributed a lot, you get a lot (as if you had kept your money in a very long-term bank deposit). If your money runs out, you get nothing (only the guaranteed state pillar remains); if you die before it runs out, your heirs receive it (as they would with a long-term deposit). If the economy has developed well and the pension fund has invested wisely, you will earn a return higher than a bank would pay, comparable to or even above the country's economic growth, and more than the state could provide you through honest redistribution under equal conditions (which is very easy to prove mathematically).
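To make that comparison concrete, here is a purely illustrative Python sketch. The contribution amount, fund return, and wage-growth rate below are assumptions of mine, not figures from the post; the point is only that a funded account compounds at the investment return, while a pay-as-you-go claim can, at best, grow with the wage base that finances it.

    # Toy comparison: funded individual account vs. a pay-as-you-go (PAYG) claim.
    # All numbers are illustrative assumptions, not data from the post.
    YEARS = 40
    CONTRIBUTION = 1000.0    # assumed yearly contribution
    FUND_RETURN = 0.05       # assumed average yearly return of the pension fund
    WAGE_GROWTH = 0.02       # assumed yearly growth of the wage base behind PAYG

    funded = 0.0             # compounds at the fund's investment return
    payg_claim = 0.0         # can only grow as fast as the contribution base
    for _ in range(YEARS):
        funded = funded * (1 + FUND_RETURN) + CONTRIBUTION
        payg_claim = payg_claim * (1 + WAGE_GROWTH) + CONTRIBUTION

    print(f"Funded account after {YEARS} years:   {funded:,.0f}")
    print(f"PAYG-style claim after {YEARS} years: {payg_claim:,.0f}")

With these assumed rates the funded account ends up roughly twice as large; if the fund's return drops below wage growth, or is eaten by mismanagement, the advantage disappears, which is exactly why oversight matters.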
It is possible for the funds to manage your money badly, investing it in yachts or in property belonging to their own owners (so that the owners buy things for themselves with your money). Then you lose, because part of the potential growth of your investment has become the private property of third parties you trusted to manage your money. That, however, is a problem KFN is supposed to track and police. If you as a citizen have doubts, you have the right to switch funds (every two years). The sooner, the better.
Now, however, comes Goranov's idea for a reform.
He is worried that:
• KFN has not been doing its job, which is why many funds are hollow and have no money at all, or their money sits in assets owned by the funds' owners, or is illiquid (tied up in a hotel that cannot be sold to pay pensions). Mavrodiev, however, is innocent.
• Some citizens may complain about receiving a small pension from the second pillar, because (1) they contributed little money (that is their own fault, though let us recall that the state is also to blame, since it never allowed the contribution to the private funds to be raised as the original plan envisaged, so the funds never received the money they were supposed to get) and (2) the funds did not manage the money well enough, because KFN did not supervise them properly (it is very important to note that Mavrodiev is innocent!).
• Some citizens will be unhappy when, after a while, they stop receiving money from their second-pillar pension (and get only the first-pillar one) because they live longer than the pension fund projected.
• First-pillar (state) pensions have reached the critical level of public spending the state can afford, and the deficit of the National Social Security Institute (NOI) technically only keeps growing. The latest reforms (Kalfin's) reduce the deficit only temporarily; it is easy to see that it will never reach zero and will at some point resume growing.
Goranov's proposed answer to his worries is to merge the money accumulated in the private pension funds into a common pool, out of which everyone's pensions would be paid.
In other words, the accounts stop being individual and become collective. As a result:
• If KFN has not done its job (Mavrodiev is innocent!), citizens may not find out right away, because if at least one fund has done its job, it will also pay for those that have not. So KFN will carry on in the same fine fashion, and so will the funds that invest in their owners' property, since the liquidity problem will be hidden by the common pool (much like the liquidity problems of NOI or of the deposit guarantee fund, which buy government bonds and technically hold no liquid money).
• Those who live shorter lives will pay the pensions of those who live longer.
• Since there will be no individual accounts (deposits) from which to determine how much each person has accumulated, the pension will not be paid according to what you put in, but according to what someone (say, the innocent Mavrodiev, in consultation with the Ministry of Finance and Goranov) has decided you deserve (and how long they have decided you deserve to live). Technically, the second pillar turns into NOI and starts copying its model.
• The private pension funds effectively become mere investment intermediaries, with no competition among themselves (everything they earn goes into the common pool) and no profit of their own. Their interest is thereby reduced (not that it wasn't already; see KFN and the innocent Mavrodiev) solely to deciding whom to funnel money to, rather than how much to earn for their investors and for themselves.
By forcing the funds to pay pensions the way NOI does, by merit rather than by accumulated value, the second pillar turns into a second NOI: state control over the spending (KFN and Ministry of Finance regulations), but private investment intermediaries which, being completely freed of responsibility, become interested in running schemes. After a certain boom in those schemes comes what is probably Goranov's third goal: simply to nationalize this NOI 2 amid the accumulated public discontent, merge it with NOI 1, and temporarily solve his problem with the growing deficit. That is something Dyankov tried once and Goranov himself has tried once already (the money piled up in private bank accounts and in individual pension-fund accounts is simply too tempting for them).
This proposal is probably being made tactically, before the funds' stress tests take place, so that it slips through more easily ahead of the stress that will follow (once the stressful results are announced). We all expect plenty of problems, and we could even place bets on exactly where they will show up. It is important for me to note that I do not mean to imply any fault whatsoever on Mavrodiev's part. Mavrodiev is innocent!
It is clear that answers are needed to the following questions:
• What should be done if a given fund has no real liquidity? There should undoubtedly be some form of protection mechanism, as there is for banks. But it should also be clear that it will only guarantee a certain minimum, which at its core is already guaranteed by the first-pillar pension. The fact that KFN is not doing its job, or that the legislator has not given it adequate tools, does not mean that the mechanism and the idea of combining a shared account (first pillar) with an individual one (second pillar) are bad. Specific problems should be solved specifically, rather than by making a fundamental change through a two-step (or one-step) nationalization that temporarily hides the problems. I understand Goranov's fear of a potential KTB among the pension funds and of the new debts it could bring to the budget. But the state, and Goranov in particular, took part in choosing KFN's leadership and in writing its rules, and they must be able to bear the political responsibility. As for the cost, we will swallow it the way we swallowed KTB, as long as we know that the causes behind it have been fixed and will not recur. Goranov, Mavrodiev is innocent!
• What should be done for people who receive less than they expected? I know one thing about myself: however much I receive, it is always less than I expect. For people to be active in managing their money, to exercise oversight, and to have realistic expectations, they must bear some risk but also have some mechanism of control. So far the state has made sure they bear only the risk. And instead of giving them a mechanism of control, it now offers to smear and spread the risk across everyone.
• What should be given to people who live longer than the fund expected? That is a problem for the fund, but also for the state (which sets the pension rules). It is logical for the fund to pay out whatever money is in the individual account. If it runs out because someone lives longer, the state (which is responsible for setting the pension rules) should bear that responsibility and should reward those who live longer with a longevity pension (kicking in, say, after age 70), whose purpose would be to compensate, at least in part, for the pension that has run out at the private fund.

And let us not forget: Mavrodiev is innocent!

