Fact-checking the truth on TCO for running Windows workloads in the cloud

Post Syndicated from Sandy Carter original https://aws.amazon.com/blogs/compute/fact-checking-the-truth-on-tco-for-running-windows-workloads-in-the-cloud/

We’ve been talking to many customers over the last 3–4 months who are concerned about the total cost of ownership (TCO) for running Microsoft Windows workloads in the cloud.

For example, Infor is a global leader in enterprise resource planning (ERP) for manufacturing, healthcare, and retail. They’ve moved thousands of their existing Microsoft SQL Server workloads to Amazon EC2 instances. As a result, they are saving 75% on monthly backup costs. With these tremendous cost savings, Infor can now focus their resources on exponential business growth, with initiatives around AI and optimization.

We also love the story of Just Eat, a UK-based company that has migrated their SQL Server workloads to AWS. They’re now focused on using that data to train Alexa skills for ordering takeout!

Here are three fact checks that you should review to ensure that you are getting the best TCO!

Fact check #1: Microsoft’s cost comparisons are misleading for running Windows workloads in the cloud

Customers have shared with us over and over why they continue to trust AWS to run their most important Windows workloads. Still, some of those customers tell us that Microsoft claims Azure is cheaper for running Windows workloads. But can this really be true?

When looking at Microsoft’s cost comparisons, we can see that their analysis is misleading because of some false assumptions. For example, Microsoft only compares the costs of the compute service and licenses. But every workload needs storage and networking! By leaving out these necessary services, Microsoft is not comparing real-world workloads.

The comparison also assumes that the AWS and Azure offerings are at performance parity, which isn’t true. While the comparison uses equivalent virtual instance configurations, Microsoft ignores the significantly higher performance of AWS compute. We hear that customers must run two to three times as many Azure instances to get the same performance as they do on AWS (see Fact check #2).

And the list goes on. Microsoft’s analysis only looks at the 2008 versions of Windows Server and SQL Server. Then, it adds the cost of expensive Extended Support to the AWS calculation (Extended Support costs 75% of the current license cost per year). This addition makes up more than half of the claimed cost difference.

Microsoft assumes that in the next three years, customers won’t move off software that’s more than 10 years old. What we hear from customers is that they plan to use their upgrade rights from Software Assurance (SA) to move to newer versions, such as SQL Server 2016. They’ll use our new automated upgrade tool to eliminate the need for these expensive fees.

Finally, the comparison assumes the use of Azure Hybrid Benefit to reduce the cost of the Azure virtual instance. It does not factor in the cost of the required Microsoft SA on each license. The required SA adds significant cost to the Microsoft side of the example and further demonstrates that their example was misleading.

These assumptions result in a comparison that does not factor in all the costs needed to run SQL Server in Azure. Nor does it account for the performance gains that you get from running on AWS.

At AWS, we are committed to helping you make the most efficient use of cloud resources and lower your Microsoft bill. It appears that Microsoft is focused on keeping those line items flat or growing them over time by adding more and more licensing complexity.

Fact check #2: Price-performance matters to your business for running SQL Server in the cloud

When deciding what cloud is best for your Windows workloads, you should consider both price and performance to find the right operational combination to maximize value for your business. It is also important to think about the future and not make important platform decisions based on technology that was designed before the rise of the cloud.

We know that better application performance is critical to your customers’ satisfaction. In fact, excellent application performance leads to 39% higher customer satisfaction. For more information, see the Netmagic Solutions whitepaper, Application Performance Management: How End-User Experience Affects Your Bottom Line. Poor performance can damage your reputation or, even worse, drive customer attrition.

To make sure that you have the best possible experience for your customers, we focused on pushing the boundaries around performance.

With that in mind, here are some comparisons done between Azure and AWS:

  • DB Best, an enterprise database consulting company, wrote two blog posts—one each for Azure and AWS. They showed how to get the best price-performance ratio for running current versions of SQL Server in the cloud.
  • ZK Research took these posts and compared the results from DB Best to show an apples-to-apples comparison. The testing from DB Best found that SQL Server on AWS consistently shows a 2–3x better performance compared to Azure, using a TPC-C-like benchmark tool called HammerDB.
  • ZK Research then used the DB Best data to calculate the comparison cost for running 1 billion transactions per month. ZK Research found that SQL Server running on Azure would cost twice as much as running on AWS, when comparing price-performance, including storage, compute, and networking.
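The price-performance arithmetic behind these findings can be sketched as a small calculation. The hourly rates and throughput numbers below are illustrative assumptions, not published benchmark figures; only the 2–3x throughput gap comes from the DB Best results cited above:

```python
# Illustrative price-performance comparison (hypothetical numbers).
# If one cloud delivers more transactions per hour on an equivalent
# instance, the effective cost per transaction is lower even at the
# same hourly rate.

def cost_per_million_txn(hourly_rate, txn_per_hour):
    """Effective cost to process one million transactions."""
    return hourly_rate / txn_per_hour * 1_000_000

# Hypothetical inputs: identical $2/hour instance price, but a 2.5x
# throughput difference (inside the 2-3x range cited by DB Best).
aws_cost   = cost_per_million_txn(2.00, 250_000)   # 8.0
azure_cost = cost_per_million_txn(2.00, 100_000)   # 20.0

print(f"AWS:   ${aws_cost:.2f} per 1M transactions")
print(f"Azure: ${azure_cost:.2f} per 1M transactions")
print(f"Azure / AWS cost ratio: {azure_cost / aws_cost:.1f}x")
```

This is why comparing only instance prices is misleading: the denominator (work done per hour) matters as much as the numerator.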

As you can see from this data, running on AWS gives you the best price-performance ratio for Windows workloads.

Fact check #3: What does an optimized TCO for Windows workloads in the cloud look like?

When assessing which cloud to run your Windows workloads in, your comparison must go well beyond just the compute and support costs. Look at the TCO of your workloads and include everything necessary to run and support them, like storage, networking, and the cost benefits of better reliability. Then, see how you can use the cloud to lower your overall TCO.

So how do you lower your costs to run Windows workloads like Windows Server and SQL Server in the cloud? Optimize those workloads for the scalability and flexibility of cloud. When companies plan cloud migrations on their own, they often use a spreadsheet inventory of their on-premises servers and try to map them, one-to-one, to new cloud-based servers. But these inventories don’t account for the capabilities of cloud-based systems.

On-premises servers are not optimized, with 84% of workloads currently over-provisioned. Many Windows and SQL Server 2008 workloads are running on older, slower server hardware. By sizing your workloads for performance and capability, not by physical servers, you can optimize your cloud migration.

Reducing the number of licenses that you use, both by server and core counts, can also drive significant cost savings. See which on-premises workloads are fault-tolerant, and then use Amazon EC2 Spot Instances to save up to 90% on your compute costs vs. On-Demand pricing.

To get the most out of moving your Windows workloads into the cloud, review and optimize each workload to take best advantage of cloud scalability and flexibility. Our customers have made the most efficient use of cloud resources by working with assessment partners like Movere or TSO Logic, which is now part of AWS.

By running detailed assessments of their environments before migration, customers can realize savings of up to 36% using AWS over three years. Customers with optimized environments often find that their AWS solutions are price-competitive with Microsoft even before taking into account the AWS price-performance advantage.

In addition, you can optimize utilization with AWS Trusted Advisor. In fact, over the last couple years, we’ve used AWS Trusted Advisor to tell customers how to spend less money with us, leading to hundreds of millions of dollars in savings for our customers every year.

Why run Windows Server and SQL Server anywhere else but AWS?

For the past 10 years, many companies, such as Adobe and Salesforce, have trusted AWS to run Windows-based workloads such as Windows Server and SQL Server. Many customers tell us that they choose AWS because of its TCO and reliability. Customers have been able to run their Windows workloads with lower costs and higher performance than on any other cloud. To learn more about our story and why customers trust AWS for their Windows workloads, check out Windows on AWS.

After the workloads are optimized for the cloud, you can save even more money by efficiently managing your Windows Server and SQL Server licenses with AWS License Manager. By the way, License Manager lets you manage licenses both on premises and in the cloud, as well as licenses for other software like SAP, Oracle, and IBM.

Dedicated Hosts allow customers to bring Windows Server and SQL Server licenses with or without Software Assurance. Licenses without Software Assurance cannot be taken to Azure. Furthermore, Dedicated Hosts allow customers to license Windows Server at the physical level and achieve a greater number of instances at a lower price than they would get through Azure Hybrid Use Benefits.


The answer is clear: AWS is the best cloud to run your Windows workloads. AWS offers the best experience for Windows workloads in the cloud, which is why we run almost 2x the number of Windows workloads compared to the next largest cloud.

Our customers have found that migrating their Windows workloads to AWS can yield significant savings and better performance. Customers like Sysco, Hess, Sony DADC New Media Solutions, Ancestry, and Expedia have chosen AWS to upgrade, migrate, and modernize their Windows workloads in the cloud.

Don’t let misleading cost comparisons prevent you from getting the most out of cloud. Let AWS help you assess how you can get the most out of cloud. Join all the AWS customers that trust us to run their most important applications in the best cloud for Windows workloads. If you want us to do an assessment for you, email us at [email protected].

The Council votes on the Directive on Copyright in the Digital Single Market

Post Syndicated from nellyo original https://nellyo.wordpress.com/2019/04/15/copyright-7/


The text of the new directive is available on the Council’s website.

Italy, Finland, Luxembourg, the Netherlands, Poland, and Sweden voted against, while Belgium, Estonia, and Slovenia abstained.

Here are the statements of the countries that did not support the directive or abstained.

Germany supported the directive, thanks to which it was adopted – without Germany there would not have been the necessary majority, as had long been known. Germany did, however, make a statement on the whole text – calling for urgent updates, as can be heard in the video from the 9th minute on – and the statement ends like this:

If it turns out that implementation leads to restrictions on freedom of expression, or if the guidelines outlined above run into obstacles in EU law, the federal government will work to correct the identified shortcomings in EU copyright law.

Listening to the German minister, one wonders how, after so much criticism, Germany still voted in favour. And conversely – what is the point of listening to the German minister when Germany – like the United Kingdom – could block the adoption of the directive, but does not.

China Spying on Undersea Internet Cables

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/china_spying_on.html

Supply chain security is an insurmountably hard problem. The recent focus is on Chinese 5G equipment, but the problem is much broader. This opinion piece looks at undersea communications cables:

But now the Chinese conglomerate Huawei Technologies, the leading firm working to deliver 5G telephony networks globally, has gone to sea. Under its Huawei Marine Networks component, it is constructing or improving nearly 100 submarine cables around the world. Last year it completed a cable stretching nearly 4,000 miles from Brazil to Cameroon. (The cable is partly owned by China Unicom, a state-controlled telecom operator.) Rivals claim that Chinese firms are able to lowball the bidding because they receive subsidies from Beijing.

Just as the experts are justifiably concerned about the inclusion of espionage “back doors” in Huawei’s 5G technology, Western intelligence professionals oppose the company’s engagement in the undersea version, which provides a much bigger bang for the buck because so much data rides on so few cables.

This shouldn’t surprise anyone. For years, the US and the Five Eyes have had a monopoly on spying on the Internet around the globe. Other countries want in.

As I have repeatedly said, we need to decide if we are going to build our future Internet systems for security or surveillance. Either everyone gets to spy, or no one gets to spy. And I believe we must choose security over surveillance, and implement a defense-dominant strategy.

EU elections 2019: applications to open polling stations continue

Post Syndicated from Боян Юруков original https://yurukov.net/blog/2019/eu2019-sabirane/

As I wrote last week, the Central Election Commission’s (CIK’s) form for submitting an electronic application to vote abroad is now available. Applications can be submitted until April 30, and they matter for several reasons.

First, a polling station can be opened at a given place once 60 applications have been collected. Stations opened this way are possible only on EU territory, and there are some restrictions in individual countries. For Germany, for example, applications can be submitted for 18 places outside the consulates: Aachen, Bremen, Darmstadt, Dortmund, Dresden, Erfurt, Karlsruhe, Konstanz, Cologne, Leipzig, Magdeburg, Münster, Nuremberg, Regensburg, Freiburg, Hamburg, Hanover, and Stuttgart. Permission has been requested for them, but a response from the German side is still awaited.

A map of the places in Germany for which voting applications can be submitted

In addition, polling stations can be opened at the proposal of the diplomatic representatives in consulates and embassies, even without collected applications. The number of stations (ballot boxes) in the missions is at their discretion and depends on the physical capacity of the premises. In principle, a second and a third station at a given place are opened after 500 and 1000 applications, respectively, have been collected.

All Bulgarian citizens who have had an address registration on EU territory for at least 60 days of the last 3 months before the elections can vote – i.e. Bulgarians living in the EU.

At this point there are over 1300 applications. Over 60 applications are expected to be collected soon for Karlsruhe and Coventry. You can follow how applications are accumulating in the table on Glasuvam.org. It is updated every 5 minutes and shows all the places where applications have been collected, how many, and which are close to getting a station. Here is a dynamic map that updates in real time:

If you have more questions about voting abroad, you can read the decisions of the CIK or ask them in the comments here.

Watch Game of Thrones with a Raspberry Pi-powered Drogon

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/watch-game-of-thrones-with-raspberry-pi-powered-drogon/

Channel your inner Targaryen by building this voice-activated, colour-changing, 3D-printed Drogon before watching the next episode of Game of Thrones.

Winter has come

This is a spoiler-free zone! I’ve already seen the new episode of season 8, but I won’t ruin anything, I promise.

Even if you’ve never watched an episode of Game of Thrones (if so, that’s fine, I don’t judge you), you’re probably aware that the final season has started.

And you might also know that the show has dragons in it — big, hulking, scaly dragons called Rhaegal, Viserion, and Drogon. They look a little something like this:

Well, not anymore. They look like this now:


Raspberry Pi voice-responsive dragon!

The creator of this project goes by the moniker Botmation. To begin with, they 3D printed a Drogon model they found on Thingiverse. Then, with Dremel in hand, they modified the print to replace its eyes with RGB LEDs. Before threading the LEDs through the hollowed-out body of the model, they soldered them to wires connected to a Raspberry Pi Zero W’s GPIO pins.

Located in the tin beneath Drogon, the Pi Zero W is also equipped with a microphone and runs the Python code for the project. And thanks to Google’s Speech to Text API, Drogon’s eyes change colour whenever a GoT character repeats one of two keywords: white turns the eyes blue, while fire turns them red.
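The keyword-to-colour logic can be sketched in a few lines of Python. This is not Botmation’s actual code: the keyword mapping mirrors the post’s description, but the colour encoding and the PWM usage in the comment are assumptions, and in the real build the transcript comes from Google’s Speech-to-Text API:

```python
# Minimal sketch of the keyword -> eye-colour logic (hypothetical
# colour encoding; the real project feeds transcripts from a
# speech-to-text service).

# Map a spoken keyword to an (R, G, B) duty-cycle triple (0-100).
KEYWORD_COLOURS = {
    "white": (0, 0, 100),   # "white" (Walkers) -> blue eyes
    "fire":  (100, 0, 0),   # "fire" -> red eyes
}

def colour_for_transcript(transcript):
    """Return the LED colour for the first keyword heard, if any."""
    for word in transcript.lower().split():
        if word in KEYWORD_COLOURS:
            return KEYWORD_COLOURS[word]
    return None  # no keyword: leave the eyes unchanged

# On the Pi itself, the triple would drive PWM channels on three
# GPIO pins, e.g. with RPi.GPIO:
#   for pwm, duty in zip((red_pwm, green_pwm, blue_pwm), colour):
#       pwm.ChangeDutyCycle(duty)

print(colour_for_transcript("the white walkers are coming"))
```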

If you’d like more information about building your own interactive Drogon, here’s a handy video. At the end, Botmation asks viewers to help improve their code for a cleaner voice-activation experience.

3D printed Drogon with LED eyes for Game of Thrones

Going into the final season of Game of Thrones with your very own 3D printed Drogon dragon! The eyes are made of LEDs that change between red and blue depending on what happens in the show. When you’re watching the show, Drogon will watch it with you and listen for cues to change the eye color.

Drogon for the throne!

I’ve managed to bag two of the three dragons in the Pi Towers Game of Thrones fantasy league, so I reckon my chances of winning are pretty good thanks to all the points I’ll rack up by killing White Walkers.

Wait — does killing a White Walker count as a kill, since they’re already dead?

Ah, crud.

The post Watch Game of Thrones with a Raspberry Pi-powered Drogon appeared first on Raspberry Pi.

How We Designed Loki to Work Easily Both as Microservices and as Monoliths

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2019/04/15/how-we-designed-loki-to-work-easily-both-as-microservices-and-as-monoliths/

In recent years, monoliths have lost favor as microservices increased in popularity. Conventional wisdom says that microservices, and distributed systems in general, are hard to operate: There are often too many dependencies and tunables, multiple moving parts, and operational complexity.

So when Grafana Labs built Loki – a service that optimizes search, aggregation, and exploration of logs natively in Grafana – the team was determined to incorporate the best of both worlds, making it much simpler to use, whether you want to run a single process on bare metal or microservices at hyperscale.

“Loki is single binary, has no dependencies, and can be run on your laptop without being connected to the Internet,” says Grafana Labs VP Product Tom Wilkie. “So it’s easy to develop on, deploy, and try.”

And if you want to run it with microservices at scale, adds Loki Engineer Edward Welch, “Loki lets you go from 1 node to 100 and 1 service to 10, to scale in a pretty straightforward fashion.”

Here’s a breakdown of how the Grafana Labs team developed the architecture of Loki to allow users “to have your cake and eat it too,” says Wilkie.

1. Easy to Deploy

With Loki, easy deployment became a priority feature after the team looked at the other offerings.

On the microservices side, “Kubernetes is well-known to be hard to deploy,” says Wilkie. “It is made of multiple components, they all need to be configured separately, they all do different jobs, they all need to be deployed in different ways. Hadoop would be exactly the same. There’s a big, whole ecosystem developed around just deploying Hadoop.”

The same criticisms even hold true for Wilkie’s other project, Cortex, with its multiple services and dependencies on Consul, Bigtable/DynamoDB/Cassandra, Memcached, and S3/GCS – although this is something Wilkie is actively working to improve.

The single-process, scale-out models such as Cassandra and Nomad have been gaining more traction recently because users can get started much more easily. “It just runs a binary on each node, and you’re done,” says Software Engineer Goutham Veeramachaneni.

So in this way, the team built Loki as a monolith: “One thing to deploy, one thing to manage, one thing to configure,” says Wilkie.

“That low barrier to entry is a huge advantage because it gets people using the project,” says Welch. “When you’re trying out an open source project and not sure if it’s the right thing for you, you don’t want to put all this time, effort, and investment into configuring and deploying the service while learning the best practices up front. You just want something that you could get started with immediately and quickly.”

2. Simple Architecture

With microservice architectures such as Kubernetes, “you don’t get any value running a scheduler or an API server on its own. Kubernetes only has a benefit when you run all the components in combination,” says Wilkie.

On the other end of the spectrum, single binary systems like Cassandra have no dependencies, and every process is the same within the system.

A lot of the inspiration for Loki was actually derived from Thanos, the open source project for running Prometheus at scale. While Thanos operates with a microservices approach, in which users have to deploy all services to run the system, it aligns each service around a given value proposition. If you want to globally aggregate your queries, you deploy Thanos queriers. If you want to do long-term storage, you deploy the store and sidecars. If you want to start downsampling, you deploy the Compactor.

“Every service you add incrementally adds benefit, and Thanos doesn’t introduce too many dependencies, so you can still run it locally,” says Wilkie. “And it doesn’t do all the jobs in the one Cassandra-style homogeneous single process.”

With Loki, Welch explains, “every instance can run the same services and has the same code. We don’t deploy different components – it’s a single binary. We deploy more of the same component and then specify what each component does at runtime. You can ask each process to do a single job, or all the jobs in one.”

So in the end, Loki users have flexibility in both dimensions. “You can deploy more of the same component – that’s closer to a Cassandra-style architecture where every process in the system is the same – or run it as a set of microservices,” explains Wilkie. “You can split those out and have every single function done in a separate process and a separate service. You’ve got that flexibility which you don’t get with Cassandra.”
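The single-binary-with-runtime-roles pattern Welch describes can be illustrated with a toy dispatcher. This is a hypothetical sketch of the pattern, not Loki’s actual code (the module names here are made up; Loki’s real entry point selects its modules via a `-target` flag):

```python
# Toy sketch of a single binary that runs one role or all roles,
# chosen at startup (hypothetical module names).

ALL_MODULES = ("distributor", "ingester", "querier")

def modules_to_run(target):
    """Resolve a runtime target into the set of modules to start."""
    if target == "all":
        return ALL_MODULES        # monolith mode: every job in one process
    if target in ALL_MODULES:
        return (target,)          # microservice mode: one job per process
    raise ValueError(f"unknown target: {target}")

# The same binary, two deployment styles:
print(modules_to_run("all"))      # one process doing everything
print(modules_to_run("querier"))  # one process, one job
```

The key property is that the code paths are identical in both modes; only the set of started modules differs, so scaling out is a configuration change rather than a rewrite.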

3. Easy to Scale

The final consideration is how the service grows as the user’s system grows.

Microservices have become the most popular option because by breaking up different functions into different services, there is the ability to isolate each service, use custom load balancing and a more specialized way of scaling and configuring.

“That’s why people went down this microservices architecture – it makes it very easy to isolate concerns from the development process,” says Wilkie. “You might have a separate team working on one service and a different team working on another. So if one service crashes, runs out of memory, pegs the CPU, or experiences trouble, it’s isolated and won’t necessarily affect the other services.”

The challenge of this approach, however, is when multiple problems arise at once. “Deploying lots of microservices makes config management hard,” says Wilkie. “If you’ve got 10 different components, diagnosing outages becomes trickier – you might not know which component is causing the problem.”

Which is why some engineers prefer a simpler approach. “I really like single binary because the biggest problem with deploying a distributed system is not deploying the distributed system, but gaining that expertise as to what to fix, what to look at,” says Veeramachaneni. “Having it run locally, having it on a single node, and experiencing the issues help users gain familiarity with the system. It also gives you that confidence that you can deploy it into production.”

The compromise: Loki has a single-process, no dependencies scale-out “mode” for ease of deployment and getting started, while allowing users to scale into a microservices architecture when ready.

“I would run a single-node Loki first, look at what breaks, and then scale out what doesn’t work,” says Veeramachaneni. And then, “slowly add that expertise.”

The Best of Both Worlds

“The nice thing about Loki is you can independently scale the right parts by splitting out the microservices,” Wilkie says. “When you want to run Loki at massive scale with microservices, you can. You can just run them all as different services, and you can introduce dependencies on Bigtable, DynamoDB, S3, Memcached, or Consul when you want to.”

By design, Loki “gives you the ability to start and learn the system easily,” he adds, “but grows to the same kind of enterprise-ready architecture that microservices offer.”

For more about Grafana’s user interface for Loki, check out this blog post.

Transitioning to ISRG’s Root

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org/2019/04/15/transitioning-to-isrg-root.html

On July 8, 2019, we will change the default intermediate certificate we provide via ACME. Most subscribers don’t need to do anything. Subscribers who support very old TLS/SSL clients may want to manually configure the older intermediate to increase backwards compatibility.

Since Let’s Encrypt launched, our certificates have been trusted by browsers via a cross-signature from another Certificate Authority (CA) named IdenTrust. A cross-signature from IdenTrust was necessary because our own root was not yet widely trusted. It takes time for a new CA to demonstrate that it is trustworthy, then it takes more time for trusted status to propagate via software updates.

Now that our own root, ISRG Root X1, is widely trusted by browsers we’d like to transition our subscribers to using our root directly, without a cross-sign.

On July 8, 2019, Let’s Encrypt will start serving a certificate chain via the ACME protocol which leads directly to our root, with no cross-signature. Most subscribers don’t need to take any action because their ACME client will handle everything automatically. Subscribers who need to support very old TLS/SSL clients may wish to manually configure their servers to continue using the cross-signature from IdenTrust. You can test whether a given client will work with the newer intermediate by accessing our test site.
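One quick sanity check is to look at which intermediate signed a server’s leaf certificate. The sketch below uses Python’s standard `ssl` module; note that it only shows the leaf’s issuer, while inspecting the full chain a server actually sends (which is what this change affects) needs a tool that exposes intermediates, such as `openssl s_client -showcerts`. The hostname in the comment is a placeholder:

```python
# Sketch: find out which intermediate signed a site's leaf
# certificate by inspecting the leaf's issuer field.
import socket
import ssl

def issuer_common_name(cert):
    """Extract the issuer CN from the dict ssl returns for a peer cert."""
    # getpeercert() encodes the issuer as a tuple of RDN tuples,
    # each RDN a tuple of (attribute, value) pairs.
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return issuer.get("commonName", "")

def leaf_issuer(hostname, port=443):
    """Connect over TLS and return the leaf certificate's issuer CN."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return issuer_common_name(tls.getpeercert())

# Example (requires network access):
#   print(leaf_issuer("example.com"))
```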

Our current cross-signature from IdenTrust expires on March 17, 2021. The IdenTrust root that we are cross-signed from expires on September 30, 2021. Within the next year we will obtain a new cross-signature that is valid until September 29, 2021. This means that our subscribers will have the option to manually configure a certificate chain that uses IdenTrust until September 29, 2021.

We’d like to thank IdenTrust for providing a cross-signature while we worked to get our own root trusted. They have been wonderful partners. IdenTrust believed in our mission to encrypt the entire Web when it seemed like a long-term dream. Together, in less than five years, we have helped to raise the percentage of encrypted page loads on the Web from 39% to 78%.

Let’s Encrypt is currently providing certificates for more than 160 million websites. We look forward to being able to serve even more websites as efforts like this make deploying HTTPS with Let’s Encrypt even easier. If you’re as excited about the potential for a 100% HTTPS Web as we are, please consider getting involved, making a donation, or sponsoring Let’s Encrypt.

A nuanced take on Assange

Post Syndicated from Bozho original https://blog.bozho.net/blog/3310

Julian Assange has been arrested.

To some, he is scum who endangered not only the lives of American soldiers and agents, but American democracy itself.

To others, he is a hero of free speech who has repeatedly exposed the American authorities.

To the former, he is a servant of the Kremlin who uses freedom of speech as a pretext to attack America.

To the latter, he is a selfless instrument for so-called whistleblowers (people who reveal secrets because they believe the public should know them), through which Western countries maintain their level of democracy and transparency.

In reality… it’s more complicated. Yes, leaking classified information always carries risks if it’s not done carefully; yes, especially in recent years WikiLeaks and the Kremlin worked hand in hand; yes, whistleblowers need a tool for making highly questionable practices public, such as Prism; and yes, WikiLeaks had a disciplining effect.

Should the public know everything? Probably not. The concept of “classified information” exists for objective reasons. But there are certainly cases in which the public should know about particular classified information. Should we have known how American soldiers shot civilians and Reuters reporters in Iraq, knowing full well that they posed no threat? Should we have known that the American government had built a sophisticated system for monitoring everything we do online? Should we have known about the letters of Surkov (a senior Kremlin aide), according to which Russia has a strategic goal of destabilizing Ukraine? In my view, in all three cases – and in many others – yes. It is no accident that these revelations also came out through reputable outlets such as The New York Times and The Guardian.

Assange is neither a hero nor scum. He may be a person pursuing an ideal, but circumstances turned this into a grotesque war against the US (probably not without their help, and not without Russia’s help either). When you are not prepared to stand on that ridge without the wind blowing you off, this is what happens. And very few people are likely to be prepared.

I don’t like Assange. Everything about his character, his closeness to the Kremlin and, not least, the rape accusations, doesn’t allow me to like him. I don’t think I want to demonize him, though. Nor to heroize him, of course.

And his arrest should not discredit the people who, at risk to their own lives, provide information the public should know. They matter – for democracy, for bringing down regimes and for preventing regimes. Not every whistleblower is good, and not every leak is worth it. Nothing is ever absolute.

Unfortunately, our brains have not evolved to distinguish the complex nuances of an ever more complex world. If they had, there would be no heroes and villains. And Assange would not be the divisive figure he is at the moment. He would just be somewhat mad, very reckless, perhaps an idealist up to a point, and past that point perhaps someone who made a deal with the wrong people. So… I hope he gets a fair trial.

[$] Expedited memory reclaim from killed processes

Post Syndicated from corbet original https://lwn.net/Articles/785709/rss

Running out of memory puts a Linux system into a difficult situation; in the worst cases, there is often no way out other than killing one or more processes to reclaim their memory. This killing may be done by the kernel itself or, on systems like Android, by a user-space out-of-memory (OOM) killer process. Killing a process is almost certain to make somebody unhappy; the kernel should at least try to use that process’s memory expeditiously so that, with luck, no other processes must die. That does not always happen, though, in current kernels. This patch set from Suren Baghdasaryan aims to improve the situation, but the solution that results in the end may take a different form.

Once again on the draft National Strategy for the Development of Bulgarian Culture

Post Syndicated from nellyo original https://nellyo.wordpress.com/2019/04/12/cult_str/

5 April 2019

Parliamentary control (transcript)

EMILIA STANEVA-MILKOVA (GERB): Dear Mr. Chairman, dear Mr. Minister, dear colleagues! My question concerns the drafting of a National Strategy for the Development of Bulgarian Culture, which has now been published for about a month.
Dear Mr. Minister, over the years the topic of drafting a single core strategic document, aimed at charting a vision for the development of Bulgarian culture, has been raised repeatedly. The need – palpable and visible – is for the formulation of a long-term programme and management goals that lead to concrete policies, which in turn deliver the results we all expect. At the same time, it is important to define the efforts and resources that must be allocated to carrying out these strategic tasks.
In this connection, my question is: at what stage is the draft National Strategy for the Development of Bulgarian Culture?
CHAIRMAN YAVOR NOTEV: Thank you, Mrs. Milkova.
Please, Mr. Minister, take the floor on this question – the finalization of the National Strategy for the Development of Bulgarian Culture. You have the floor.
Dear Mr. Chairman, dear members of parliament! Dear Mrs. Milkova, drafting a Strategy for the Development of Bulgarian Culture is a complex and lengthy process. It includes both an analysis of the existing environment and the setting of concrete goals and priorities towards the final aim, which is the development of a long-term policy of supporting culture as a national priority.
Over the years, many attempts have been made to prepare such a strategic document. At the beginning of this mandate, it was announced that drafting a Strategy for the Development of Bulgarian Culture with a ten-year horizon would be one of our main priorities. A broad working group was created, which included more than 150 representatives of various organizations and guilds in the field of culture. Various cultural figures and researchers of cultural processes also contributed proposals. Foreign practices were studied as well. Discussions were also held in the smaller working groups formed for the individual areas.
As a result, and on the basis of a situational analysis that was prepared, a draft strategy was produced that lays out the main priorities and strategic goals to be pursued over the next 10 years. This approach envisages the subsequent drafting of specific strategic documents with a vision for each sector individually. The vision for the development of culture in the period 2019–2029 is also set out.
Приоритетно се поставя виждането за развитие на нови форми и политики. Това е свързано с осъществяването на непрекъснат анализ на културните процеси и актуализация на заложените цели. Постигането на подобно състояние на културата изисква подкрепа на стратегически значими инициативи, като опазване и насърчаване на културното многообразие, творческа мобилност и защита на интелектуалната собственост в областта на културата, както и създаването на условия за развитие на опазването на културното наследство, съвременните форми на културно изразяване, културните и творчески индустрии и качествено образование в областта на изкуството и културата.
Към настоящия момент Проектът за Стратегия за развитие на българската култура е в етап на обществено обсъждане. С цел това обсъждане да бъде максимално ефективно Министерството на културата планира поредица от обществени дискусии по сектори, които целят прецизиране на документа и неговото финализиране. Първата дискусия ще бъде проведена на 17 април, зала 3 на НДК. Благодаря.
ПРЕДСЕДАТЕЛ ЯВОР НОТЕВ: Благодаря Ви, господин Министър.
Госпожо Милкова, заповядайте, имате възможност за реплика.
ЕМИЛИЯ СТАНЕВА-МИЛКОВА (ГЕРБ): Уважаеми господин Председател, уважаеми господин Банов, уважаеми колеги! Искам да изразя моята удовлетвореност от отговора, който дадохте, и смятам, че е изключително важно, че Министерството на културата във Ваше лице успя най-накрая да създаде този стратегически документ, което е изключително важно за развитието на целия културен сектор.
Това, че предприемате стъпки за широко обсъждане, е изключително ценно, тъй като колегите от различните гилдии, от целия спектър на културния ни живот, както и представители на обществени, държавни и, разбира се, неправителствения сектор, който участва в създаването на българската култура, ще има възможност още един път да изкаже своето мнение и становище, защото всички сме наясно, че такъв сложен документ не може да се осъществи единствено и само с усилията на една от страните – говорим за разбиране от страна на всички и партньорство между различните представители в културния сектор.
Благодаря още веднъж и с удоволствие ще бъдем на 17 април в зала 3 на НДК, за да участваме в този дебат.


The question came from GERB, and the minister's answer was received with enthusiasm; in short, it appears this text will be put in motion. The trouble is that no amount of public consultation can fix the approach itself, a strategy for the ministry rather than a strategy for culture, and one without an adequate financial framework at that.


Maliciously Tampering with Medical Imagery

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/maliciously_tam.html

In what I am sure is only a first in many similar demonstrations, researchers are able to add or remove cancer signs from CT scans. The results easily fool radiologists.

I don’t think the medical device industry has thought at all about data integrity and authentication issues. In a world where sensor data of all kinds is undetectably manipulatable, they’re going to have to start.

Research paper. Slashdot thread.

New Version of Flame Malware Discovered

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/new_version_of_.html

Flame was discovered in 2012, linked to Stuxnet, and believed to be American in origin. It has recently been linked to more modern malware through new analysis tools that find linkages between different software.

Seems that Flame did not disappear after it was discovered, as was previously thought. (Its controllers used a kill switch to disable and erase it.) It was rewritten and reintroduced.

Note that the article claims that Flame was believed to be Israeli in origin. That’s wrong; most people who have an opinion believe it is from the NSA.

Raspberry Pi-controlled brass bell for the ultimate wake-up call

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/raspberry-pi-controlled-brass-bell-for-ultimate-the-wake-up-call/

Not one for rising with the sun, and getting more and more skilled at throwing their watch across the room to snooze their alarm, Reddit user ravenspired decided to hook up a physical bell to a Raspberry Pi and servo motor to create the ultimate morning wake-up call.


This has to be the harshest thing to wake up to EVER!

Wake up, Boo

“I have difficulty waking up in the morning,” admits ravenspired, who goes by the name Darks Pi on YouTube. “My watch isn’t doing its job.”

Therefore, ravenspired attached a bell to a servo motor, and the servo motor to a Raspberry Pi, then wrote Python code in Thonny, the free IDE that ships with Raspbian, to ring the bell when it’s time to get up.

“A while loop searches for what time it is and checks it against my alarm time. When the alarm is active, it sends commands to the servo to move.”
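
That loop can be sketched in plain Python. This is a minimal sketch, not the project's actual code: the 07:00 alarm time is an assumption (the post doesn't say when ravenspired gets up), and `ring_bell()` is a stand-in for the real servo commands, which on the Pi would go through a GPIO servo library such as gpiozero.

```python
import time as clock
from datetime import datetime, time as daytime

ALARM = daytime(7, 0)  # assumed alarm time; adjust to taste

def is_alarm_time(now, alarm=ALARM):
    """True once the clock has reached the alarm's hour and minute."""
    return (now.hour, now.minute) == (alarm.hour, alarm.minute)

def ring_bell():
    """Stand-in for the servo motion that swings the striker against the bell."""
    print("Ding ding ding!")

def run_alarm():
    """Poll the current time once a second, as the quoted while loop does."""
    while True:
        if is_alarm_time(datetime.now().time()):
            ring_bell()
        clock.sleep(1)
```

Keeping the time comparison in its own function makes it easy to test without a Pi attached; `run_alarm()` would simply run forever on the device, ringing for the whole alarm minute.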


While I’d be concerned about how securely attached the heavy brass bell above my head is, this is still a fun project, and an inventive way to address a common problem.

And it’s a lot less painful than this…

The Wake-up Machine TAKE #2

I built an alarm clock that wakes me up in the morning by slapping me in the face with a rubber arm.

Have you created a completely over-engineered solution for a common problem? Then we want to see it!

The post Raspberry Pi-controlled brass bell for the ultimate wake-up call appeared first on Raspberry Pi.

[$] Counting corporate beans

Post Syndicated from corbet original https://lwn.net/Articles/785553/rss

Some things simply take time. When your editor restarted the search for a free accounting system, he had truly hoped to be done by now. But life gets busy, and accounting systems are remarkably prone to falling off the list of things one wants to deal with in any given day. On the other hand, accounting can return to that list quickly whenever LWN’s proprietary accounting software does something particularly obnoxious. This turns out to be one of those times, so your editor set out to determine whether beancount could do the job.
