The right to be rude

Post Syndicated from esr original http://esr.ibiblio.org/?p=8609

The historian Robert Conquest once wrote: “The behavior of any bureaucratic organization can best be understood by assuming that it is controlled by a secret cabal of its enemies.”

Today I learned that the Open Source Initiative has reached that point of bureaucratization. I was kicked off their lists for being too rhetorically forceful in opposing certain recent attempts to subvert OSD clauses 5 and 6. This despite the fact that I had vocal support from multiple list members who thanked me for being willing to speak out.

It shouldn’t be news to anyone that there is an effort afoot to change – I would say corrupt – the fundamental premises of the open-source culture. Instead of meritocracy and “show me the code”, we are now urged to behave so that no-one will ever feel uncomfortable.

The effect – the intended effect, I should say – is to diminish the prestige and autonomy of people who do the work – write the code – in favor of self-appointed tone-policers. In the process, the freedom to speak necessary truths even when the manner in which they are expressed is unpleasant is being gradually strangled.

And that is bad for us. Very bad. Both directly – it damages our self-correction process – and in its second-order effects. The habit of institutional tone policing, even when well-intentioned, too easily slides into the active censorship of disfavored views.

The cost of a culture in which avoiding offense trumps the liberty to speak is that crybullies control the discourse. To our great shame, people who should know better – such as the OSI list moderators and BOD – have internalized anticipatory surrender to crybullying. They no longer even wait for the soi-disant victims to complain before wielding the ban-hammer.

We are being social-hacked from being a culture in which freedom is the highest value to one in which it is trumped by the suppression of wrongthink and wrongspeak. Our enemies – people like Coraline Ada-Ehmke – do not even really bother to hide this objective.

Our culture is not fatally damaged yet, but the trend is not good. OSI has been suborned and is betraying its founding commitment to freedom. “Codes of Conduct” that purport to regulate even off-project speech have become all too common.

Wake up and speak out. Embrace the right to be rude – not because “rude” in itself is a good thing, but because the degenerative slide into suppression of disfavored opinions has to be stopped right where it starts, at the tone policing.

Configuring and using monitoring and notifications in Amazon Lightsail

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/compute/configuring-and-using-monitoring-and-notifications-in-amazon-lightsail/

This post is contributed by Mike Coleman | Developer Advocate for Lightsail | Twitter: @mikegcoleman

We recently announced the release of resource monitoring, alarms, and notifications for Amazon Lightsail. This new feature allows you to set alarm thresholds on your Lightsail instances, databases, and load balancers. When those alarm thresholds are breached, you can be notified via the Lightsail console, SMS text message, and/or email.

In this blog, I walk you through the process of setting your notification contacts, creating a new alarm, and testing out the notifications. Additionally, you can simulate a failure to see how that is represented in the metrics graph, console, and notification endpoints.

The alarm you create in this blog post notifies you whenever there are two or fewer healthy instances attached to your load balancer. In a production workload this is a fairly critical scenario, so you can configure Lightsail to notify you as soon as it discovers the problem.

Prerequisites
To complete this walkthrough, you need three running Lightsail instances, placed behind a Lightsail load balancer. If you need help getting that done, check out our documentation on instances and load balancers. Don’t worry about what’s running on your instances; it’s not important for the sake of this walkthrough.
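
If you prefer to script this setup, the following AWS CLI sketch creates three instances and a load balancer and attaches them. The instance names, blueprint, bundle, Availability Zone, and load balancer name are example values (not from the original walkthrough); adjust them for your account and Region.

# Sketch only -- example names and sizes; adjust before running
aws lightsail create-instances \
  --instance-names web-1 web-2 web-3 \
  --availability-zone us-east-1a \
  --blueprint-id amazon_linux_2 \
  --bundle-id nano_2_0

# Create a load balancer that health-checks instances on port 80
aws lightsail create-load-balancer \
  --load-balancer-name demo-lb \
  --instance-port 80

# Attach the three instances to the load balancer
aws lightsail attach-instances-to-load-balancer \
  --load-balancer-name demo-lb \
  --instance-names web-1 web-2 web-3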

Configuring notification contacts
To get started, create notification contacts for both email and SMS. You can create one email contact per AWS Region where Lightsail operates. If you find that you need to notify multiple people via email, you can create an email distribution list with those contacts. Then, set Lightsail to send the email notification to that distribution list.

For SMS, you can only create contacts in specific Regions. For an up-to-date list of those Regions, see the Lightsail documentation.

Let’s get started creating the notification contacts. First, log into the AWS Management Console, navigate to the Lightsail home page, and follow the steps below.

  1. Click Account near the top right of the Lightsail home page, and then click Account from the dropdown menu.
  2. Scroll down to the Notification contacts section and click + Add email address. Ensure that the correct AWS Region is selected.
  3. Enter the email address where you want Lightsail to send alarm notifications in the text box.
  4. Click Add contact. Then, click I understand. This signifies that you understand that a verification email is sent to the email address you entered, and that notifications are not sent to that address until it’s verified. You verify your email address in the next section.
  5. Scroll down and click + Add SMS number, and ensure that the correct AWS Region is selected.
  6. Choose the country/Region for the phone number you want to use (note: this can be different from your AWS Region), and enter the mobile number at which you want to receive notifications.
  7. Click Add contact.
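
If you would rather create the contacts from the AWS CLI, a minimal sketch follows. The email address and phone number are placeholders; the phone number should be in E.164 format, and the Region must be one where Lightsail supports the chosen protocol.

# Sketch: create email and SMS notification contacts from the CLI
aws lightsail create-contact-method \
  --protocol Email \
  --contact-endpoint you@example.com \
  --region us-east-1

aws lightsail create-contact-method \
  --protocol SMS \
  --contact-endpoint +15555550123 \
  --region us-east-1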

Verifying the notification contact email address

Return to the Lightsail home page. There should be a banner at the top of the page notifying you that your email address must be verified.

  1. Access the email account that you set up in the previous section. You should find an email with the subject AWS Notification – Subscription Confirmation.
  2. Open the email.
  3. Near the bottom of the email body there is a link to verify your email address. Click the verification link. A webpage loads, letting you know that your email address was successfully verified.
  4. Refresh the Lightsail home page, and the verification banner should now be gone.
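
You can also check the verification state from the AWS CLI. A minimal sketch, assuming the contacts were created in us-east-1: look for a status of Valid on the email contact (PendingVerification means the link hasn’t been clicked yet), and resend the verification email if it never arrived.

# Sketch: inspect contact methods and resend the email verification if needed
aws lightsail get-contact-methods --region us-east-1
aws lightsail send-contact-method-verification --protocol Email --region us-east-1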

Creating an alarm

Now you have your notification contacts set. It’s time to create the actual alarm. An alarm is defined by choosing a metric to monitor, and defining a threshold value and schedule for evaluation.

In our example today, we’re going to look at the healthy host count metric. As the name implies, this metric tells us how many healthy hosts are connected to our Lightsail load balancer. In this example, there are three instances attached to the load balancer, and you want to know as soon as possible if one of them becomes unhealthy – in other words, when there are two or fewer healthy instances.

Lightsail reports metrics every five minutes, so set the alarm to notify you the first time the metric reports two or fewer healthy instances within any five-minute window. Follow the steps below to configure the alarm.

  1. From the Lightsail home page click on Networking from the horizontal menu, and then click on the name of your load balancer.
  2. From the horizontal menu click on Metric.
  3. Ensure that Healthy host count is selected from the metric dropdown menu.
  4. Scroll down the page and click + Add alarm.
  5. Use the dialog box and drop-down menu to set the threshold for healthy host count to Less than or equal to 2. Set the time values so that an alarm is raised if the threshold is breached 1 time within the last 5 minutes.
  6. Leave the check boxes checked for SMS and email notifications.
  7. The alarm is created, but Lightsail may display a message indicating that the alarm is in insufficient data state. This disappears in a few minutes.
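
For reference, the same alarm can be expressed as a single AWS CLI call. This is a sketch: the alarm and load balancer names are placeholders, and each evaluation period corresponds to one five-minute metric interval.

# Sketch: alarm on two or fewer healthy hosts, notifying by email and SMS
aws lightsail put-alarm \
  --alarm-name healthy-host-count-low \
  --monitored-resource-name demo-lb \
  --metric-name HealthyHostCount \
  --comparison-operator LessThanOrEqualToThreshold \
  --threshold 2 \
  --evaluation-periods 1 \
  --contact-protocols Email SMS \
  --notification-enabled \
  --region us-east-1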

Test to make sure that you actually receive a notification when the alarm threshold is breached.

  1. To test that the notifications are configured correctly, click the three dot menu for your alarm.
  2. From the drop down menu, choose Test alarm notification. After a minute or two, you should get both a text and an email notification.
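
The same test can be run from the AWS CLI. A sketch, using the placeholder alarm name from above; valid test states are ALARM, OK, and INSUFFICIENT_DATA.

# Sketch: send a test notification for the alarm
aws lightsail test-alarm \
  --alarm-name healthy-host-count-low \
  --state ALARM \
  --region us-east-1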

Triggering the alarm with a simulated instance failure

So, at this point you have an alarm set, and you know that the notifications are working properly. In this final step, you simulate a failure to ensure that it’s detected correctly by Lightsail and that the alarm then sends the appropriate notifications.

To simulate a failure, you’re simply going to stop one of the instances attached to your load balancer by following the steps below.

  1. Return to the Lightsail home page.
  2. Click on the three dot menu for one of your instances and click Stop.
  3. Click Stop again to confirm.
  4. Return to the Load Balancers metrics page, and notice that after a few minutes the graph shows a drop in the healthy host count. Shortly after the graph updates, the alarm should breach, and you should receive your text and email notifications. Note: Since our alarm threshold is evaluated after 5 minutes, it may take a few minutes for the alarm to trigger and notifications to be sent.
  5. Once you receive your text and/or email notifications, return to the Lightsail home page. Notice there is also a notification banner at the top of the page.
  6. Click on the three dot menu for the instance you previously stopped, and click Start.
  7. Return to the load balancer metrics screen and wait for the graph to update and show three healthy instances. You may need to refresh your web browser.
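
If you want to run the same simulation from the AWS CLI, a sketch follows; the instance and alarm names are the placeholders used earlier.

# Sketch: stop one instance, watch the alarm state, then restore the instance
aws lightsail stop-instance --instance-name web-1 --region us-east-1

# Poll the alarm until its state changes to ALARM (this can take several minutes)
aws lightsail get-alarms --alarm-name healthy-host-count-low --region us-east-1

# Bring the instance back once you have seen the notifications
aws lightsail start-instance --instance-name web-1 --region us-east-1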

As mentioned previously, you can create alarms for a wide variety of metrics across databases, load balancers, and instances. For a complete list of metrics, check out the Lightsail documentation.

Conclusion

So, that’s all there is to it. If you spun up new instances for this simulation, make sure to terminate those to eliminate extra costs. If you’ve already got some critical resources running in Lightsail, now is a good time to set up some alarms and notifications. If you don’t have anything currently running in Lightsail, why not take advantage of our free 30-day offer for Lightsail instances and build something today? Need more? Reach out to me @mikegcoleman, and check out Lightsail’s landing page for other resources and tutorials.

Automating code reviews and application profiling with Amazon CodeGuru

Post Syndicated from Nikunj Vaidya original https://aws.amazon.com/blogs/devops/automating-code-reviews-and-application-profiling-with-amazon-codeguru/

Amazon CodeGuru is a machine learning-based service released during re:Invent 2019 for automated code reviews and application performance recommendations. CodeGuru equips the development teams with the tools to maintain a high bar for coding standards in their software development process.

CodeGuru Reviewer helps developers avoid introducing issues that are difficult to detect, troubleshoot, reproduce, and root-cause. It also enables them to improve application performance. This not only improves the reliability of the software, but also cuts down the time spent chasing difficult issues like race conditions, slow resource leaks, thread safety issues, use of un-sanitized inputs, inappropriate handling of sensitive data, and application performance impact, to name a few.

CodeGuru is powered by machine learning, best practices, and hard-learned lessons across millions of code reviews and thousands of applications profiled on open source projects and internally at Amazon.

The service leverages machine-learning abilities to provide the following two functionalities:

a) Reviewer: provides automated code reviews for static code analysis

b) Profiler: provides visibility into and recommendations about application performance during runtime

This blog post provides a short workshop to get a feel for both of the above functionalities.

Solution overview

The following diagram illustrates a typical developer workflow in which the CodeGuru service is used in the code-review stage and the application performance-monitoring stage. The code reviewer is used for static code analysis backed by trained machine-learning models, and the profiler is used to monitor application performance when the code artifact is deployed and executed on the target compute.

Development Workflow

The following diagram depicts additional details to show the big picture in the overall schema of the CodeGuru workflow:

Big Picture Development Workflow
This blog workshop automates the deployment of a sample application from a GitHub link via an AWS CloudFormation template, including the dependencies needed. It also demonstrates the Reviewer functionality.

Pre-requisites

Follow these steps to get set up:

1. Set up your AWS Cloud9 environment and access the bash terminal, preferably in the us-east-1 region.

2. Ensure you have an individual GitHub account.

3. Configure an Amazon EC2 key-pair (preferably in the us-east-1 region) and make the .pem file available from the terminal being used.

CodeGuru Reviewer

This section demonstrates how to configure CodeGuru Reviewer functionality and associate it with the GitHub repository. Execute the following configuration steps:

Step 1: Fork the GitHub repository
First, log in to your GitHub account and navigate to this sample code. Choose Fork and wait for it to create a fork in your account, as shown in the following screenshot.

Figure shows how to fork a repository from GitHub

Step 2: Associate the GitHub repository
Log in to the CodeGuru dashboard and follow these steps:

1. Choose Reviewer from the left panel and choose Associate repository.

2. Choose GitHub and then choose Connect to GitHub.

3. Once you are authenticated and the connection is made, you can select the repository aws-codeguru-profiler-sample-application from the Repository location drop-down list and choose Associate, as shown in the following screenshot.

Associate repository with CodeGuru service

This associates CodeGuru Reviewer with the specified repository, and the service then listens for any pull-request events.

Associated repository with CodeGuru service
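
GitHub repositories are associated through the console OAuth flow shown above. If your code lives in AWS CodeCommit instead, the association can also be made from the AWS CLI; the sketch below uses a placeholder repository name.

# Sketch: associate a CodeCommit repository with CodeGuru Reviewer
aws codeguru-reviewer associate-repository \
  --repository CodeCommit={Name=my-codecommit-repo} \
  --region us-east-1

# List associations (handy later when you need the association ARN for cleanup)
aws codeguru-reviewer list-repository-associations --region us-east-1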

Step 3: Prepare your code
From your AWS Cloud9 terminal, clone the repository and create a new branch using the following example commands:

git clone https://github.com/<your-userid>/aws-codeguru-profiler-sample-application.git
cd aws-codeguru-profiler-sample-application
git branch dev
git checkout dev
cd src/main/java/com/company/sample/application/

Open the file CreateOrderThread.java and go to line 63. Below line 63, which adds an order entry, insert the if statement under the comment to introduce an order entry check. Indent the lines with spaces so they are aligned as shown below.

SalesSystem.orders.put(orderDate, order);
//Check if the Order entered and present
if (SalesSystem.orders.containsKey(orderDate)) {
        System.out.println("New order verified to be present in hashmap: " + SalesSystem.orders.get(orderDate)); 
}
id++;

Once the above changes are introduced in the file, save it and commit it to git with the commands below; you push it to the repository in the next step.

git add .
git commit -s -m "Introducing new code that is potentially thread unsafe and inefficient"
cd ../../../../../../../
ls src/main/java/com/company/sample/application/

Now, upload the new branch to the GitHub repository using the following commands. Enter your credentials when asked to authenticate against your GitHub account:

git status
git push --set-upstream origin dev
ls

Step 4: Create a Pull request on GitHub:
In your GitHub account, you should see a new branch: dev.

1. Go to your GitHub account and choose the Pull requests tab.

2. Select New pull request.

3. Under Comparing Changes, select <userid>/aws-codeguru-profiler-sample-application as the source (head) repository.

4. Select the options from the two drop-down lists to propose a merge from the dev branch to the master branch, as shown in the following screenshot.

5. Review the code diffs shown. It should say that the diffs are mergeable (able to merge). Choose Create Pull request to complete the process.

Creating Pull request

This sends a Pull request notification to the CodeGuru service and is reflected on the CodeGuru dashboard, as shown in the following screenshot.

CodeGuru Dashboard

After a short time, a set of recommendations appears on the same GitHub page on which the Pull request was created.

The demo profiler configuration and recommendations shown on the dashboard are provided by default as a sample application profile. See the profiler section of this post for further discussion.

The following screenshot shows a recommendation generated about potential thread concurrency susceptibility:

CodeGuru Recommendations on GitHub

The example below shows how a developer can provide feedback about recommendations using emojis:

How to provide feedback for CodeGuru recommendations

As you can see from the recommendations, not only are the code issues detected, but a detailed recommendation is also provided on how to fix the issues, along with links to examples and documentation, wherever applicable. For each of the recommendations, a developer can give feedback about whether the recommendation was useful or not with a simple emoji selection under Pick your reaction.

Please note that the CodeGuru service is used to identify difficult-to-find functional defects and not syntactical errors. Syntax errors should be flagged by the IDE and addressed at an early stage of development. CodeGuru is introduced at a later stage in a developer workflow, when the code is already developed, unit-tested, and ready for code-review.

CodeGuru Profiler

CodeGuru Profiler functionality focuses on searching for application performance optimizations, identifying your most “expensive” lines of code – those that take unnecessarily long or use more CPU cycles than expected, and for which there is a better/faster/cheaper alternative. It generates recommendations with actions you can take in order to reduce your CPU use, lower your compute costs, and improve your application’s overall performance. The profiler simplifies the troubleshooting and exploration of the application’s runtime behavior using visualizations. Examples of such issues include excessive recreation of expensive objects, expensive deserialization, usage of inefficient libraries, and excessive logging.

The Profiler section provides two sample application demo profiles by default to demonstrate the visualization of the CPU and latency characteristics of those profiles. This offers a quick and easy way to check the profiler output without onboarding an application. Additionally, recommendations are shown for the {CodeGuru} DemoProfilingGroup-WithIssues application profile. However, if you would like to run a proof of concept with a real application, follow the procedure below.

The following steps launch a sample application on Amazon EC2 and configure Profiler to monitor the application performance from the CodeGuru service.

Step 1: Create a profiling group
Follow these steps to create a profiling group:

1. From the CodeGuru dashboard, choose Profiler from the left panel.

2. Under Profiling groups, select Create profiling group and type the name of your group. This workshop uses the name DemoProfilingGroup.

3. After typing the name, choose Create in the bottom right corner.

The output page shows you instructions on how to onboard the CodeGuru Profiler Agent library into your application, along with the necessary permissions required to successfully submit data to CodeGuru. This workshop uses the AWS CloudFormation template to automate the onboarding configuration and launch Amazon EC2 with the application and its dependencies.
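
The profiling group can also be created from the AWS CLI. A minimal sketch, using the same group name as the console steps above:

# Sketch: create and verify the profiling group from the CLI
aws codeguruprofiler create-profiling-group \
  --profiling-group-name DemoProfilingGroup \
  --region us-east-1

aws codeguruprofiler describe-profiling-group \
  --profiling-group-name DemoProfilingGroup \
  --region us-east-1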

Step 2: Run AWS CloudFormation to launch Amazon EC2 with the Java application:
This example runs an AWS CloudFormation template that does all the undifferentiated heavy lifting of launching an Amazon EC2 machine and installing JDK, Maven, and the sample demo application.

Once done, it configures the application to use a profiling group named DemoProfilingGroup, compiles the application, and executes it as a background process. This results in the sample demo application running in the region you choose, and submits profiling data to the CodeGuru Profiler Service under the DemoProfilingGroup profiling group created in the previous step.

To launch the AWS CloudFormation template that deploys the demo application, choose the following Launch Stack button, and fill in the Stack name, Key-pair name, and Profiling Group name.

Launch Button
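
If you prefer the AWS CLI to the Launch Stack button, the stack can be created with a command like the sketch below. The template URL and parameter keys (KeyName, ProfilingGroupName) are assumptions based on the description above, not taken from the actual template; copy the real values from the Launch Stack link before running it.

# Sketch only -- replace the template URL and parameter keys with the real ones
aws cloudformation create-stack \
  --stack-name codeguru-demo \
  --template-url https://<bucket>/<demo-template>.yaml \
  --parameters ParameterKey=KeyName,ParameterValue=<your-ec2-keypair-name> \
               ParameterKey=ProfilingGroupName,ParameterValue=DemoProfilingGroup \
  --capabilities CAPABILITY_IAM \
  --region us-east-1

# Wait for stack creation to finish before trying to ssh to the instance
aws cloudformation wait stack-create-complete --stack-name codeguru-demo --region us-east-1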

Once the AWS CloudFormation deployment succeeds, log in to your terminal of choice and use ssh to connect to the Amazon EC2 machine. Check the running application using the following commands:

ssh -i '<path-to-keypair.pem-file>' ec2-user@<ec2-ip-address>  => replace ec2-user with your AMI's default user name if it differs
java -version
mvn -v
ps -ef | grep SalesSystem  => This is the java application running in the background
tail /var/log/cloud-init-output.log  => You should see the following output in 10-15 minutes as INFO: Successfully reported profile

Once the CodeGuru agent is imported into the application, a separate profiler thread spawns when the application runs. It samples the application’s CPU and latency characteristics and delivers them to the backend Profiler service, which builds the application profile.

Step 3: Check the Profiler flame-graphs:
Wait for 10-15 minutes for your profiling-group to become active (if not already) and for profiling data to be submitted and aggregated by the CodeGuru Profiler service.

Visit the Profiling Groups page and choose DemoProfilingGroup. You should see the following page showing your application’s profiling data in a visualization called a flame-graph, as shown in the screenshot below. A detailed explanation of flame-graphs and how to read them follows.

Profiler flame-graph visualization

Profiler extensively uses flame-graph visualizations to display your application’s profiling data since they’re a great way to convey issues once you understand how to read them.

The x-axis shows the stack profile population (collection of stack traces) sorted alphabetically (not chronologically), and the y-axis shows stack depth, counting from zero at the bottom. Each rectangle represents a stack frame. The wider a frame is, the more often it was present in the stacks. The top edge shows what is on CPU, and beneath it is its ancestry. The colors are usually not significant (they’re picked randomly to differentiate frames).

As shown in the preceding screenshot, the stack traces for the three threads are shown, which are triggered by the code in the SalesSystem.java file.

1) createOrderThread.run

2) createIllegalOrderThread.run

3) listOrderThread.run

The flame-graph also depicts the stack depth and shows specific function names when you hover over a block. The marked areas in the flame-graph highlight the top on-CPU functions and spikes in the stack trace, which may indicate an opportunity to optimize.

It is evident from the preceding diagram that significant CPU time is being used by an exception stack trace (leftmost). It’s also highlighted in the recommendation report as described in Step 4 below.

The exception is caused by trying to instantiate an Enum class giving it invalid String values. If you review the file CreateIllegalOrderThread.java, you should notice the constructors being called with illegal product names, which are defined in ProductName.java.

Step 4: Profiler Recommendations:
Apart from the real-time visualization of application performance described in the preceding section, a recommendation report (generated after a period of time) may appear, pointing out suspected inefficiencies you can fix to improve application performance. Once the recommendation appears, select the Recommendation link to see the details.

Each section in the Recommendations report can be expanded in order to get instructions on how to resolve the issue, or to examine several locations in which there were issues in your data, as shown in the following screenshot.

CodeGuru profiler recommendations

In the preceding example, the report includes an issue named Checking for Values not in enum, in which it conveys that more time (15.4%) was spent processing exceptions than expected (less than 1%). The reason for the exceptions is described in Step 3 and the resolution recommendations are provided in the report.

CodeGuru supportability:

CodeGuru currently supports native Java-based applications for the Reviewer and Profiler functionality. The Reviewer functionality currently supports AWS CodeCommit and all cloud-hosted non-enterprise versions of GitHub products, including Free/Pro/Team, as code repositories.

Amazon CodeGuru Profiler does not have any code repository dependence and works with Java applications hosted on Amazon EC2, containerized applications running on Amazon ECS and Amazon EKS, serverless applications running on AWS Fargate, and on-premises hosts with adequate AWS credentials.

Cleanup

At the end of this workshop, once the testing is completed, follow these steps to disable the service to avoid incurring any further charges.

1. Reviewer: Remove the association of the CodeGuru service to the repository, so that any further Pull-request notifications don’t trigger the CodeGuru service to perform an automated code-review.

2. Profiler: Remove the profiling group.

3. Amazon EC2 Compute: Go to the Amazon EC2 service, select the CodeGuru EC2 machine, and select the option to terminate the Amazon EC2 compute.
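
The same cleanup can be scripted with the AWS CLI. A sketch, assuming the placeholder names used earlier in this post; substitute your own association ARN, profiling group, and stack name.

# 1. Stop Reviewer from reacting to new pull requests
aws codeguru-reviewer disassociate-repository \
  --association-arn <your-repository-association-arn>

# 2. Remove the profiling group
aws codeguruprofiler delete-profiling-group \
  --profiling-group-name DemoProfilingGroup

# 3. Delete the CloudFormation stack, which terminates the demo EC2 instance
aws cloudformation delete-stack --stack-name codeguru-demo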

Conclusion

This post reviewed the CodeGuru service and implemented code examples for the Reviewer and Profiler functionalities. It described the Reviewer functionality, which provides automated code reviews with detailed guidance on fixes. The Profiler functionality enabled you to visualize your application’s runtime stack for granular inspection and generate recommendations that provide guidance on performance improvements.

I hope this post was informative and enabled you to onboard and test the service, as well as to leverage this service in your application development workflow.

About the Author

Nikunj Vaidya is a Sr. Solutions Architect with Amazon Web Services, focusing on DevOps services. He builds technical content for field enablement and offers technical guidance to customers on AWS DevOps solutions and services that streamline the application development process, accelerate application delivery, and help maintain a high bar of software quality.

Smarter Grids Pave the Way for More Open Grids

Post Syndicated from Lucas Laursen original https://spectrum.ieee.org/energywise/energy/the-smarter-grid/open-smart-grids

As the sun sets across the Netherlands, streetlights twinkle on, town by town. But it’s not in lockstep: some city managers can set their lights to respond to local sunset time or a schedule of their own, or they can control individual lights for local events. That’s because in 2017 those cities adopted a smart grid software platform built by Dutch public utility Alliander that may be the first open smart grid platform in everyday use.

Before, these cities could only operate their lights collectively because they used ripple control technology, a widespread control method that sends a pulse over the grid. While smarter control of streetlights may be handy for cities and save them some energy and cash, Alliander has also re-used the platform to manage a growing number of additional services and, earlier this month, passed control of the platform to LF Energy, part of the Linux Foundation.

“Utilities want to get rid of the black box,” says Shuli Goodman, executive director of LF Energy. Alliander started developing its own black box in 2013 but took it open source in 2015 thanks to lobbying by Sander Jansen, a data architect there. 

“What I saw was the big [grid software] vendors had their own roadmap, their own product managers, their own vision and it doesn’t always align with what clients want,” Jansen recalls. Developing their own solution gave Alliander more options and prevented it from being stuck with any one provider’s service. Now that it is open source, it also allows third parties to develop their own uses for the platform.

So far, most of the outside interest has been in smart meters, Jansen says. Another project involves interfacing with municipal charging stations for electric cars. Other projects focus on more traditional grid management concerns such as distribution automation.

The electricity grid’s relationship to open source actually dates back to 1997, if not before, when some North American utilities and research organizations used open source software to simulate local grid management scenarios. Academics also developed their own open source research tools, such as the 2005 open source grid tool called PSAT, developed by Federico Milano at University College Dublin, Ireland.

But there wasn’t much collaboration between academia and utilities, Milano says: “The [electric utility] community is very closed and not willing to help at all except for some, few individuals. The problem is [the people who use] open source tools are PhD students… Then, when they are hired by some company, they are forced to use some commercial software tool and do not have time to spare to contribute to the community with their code.”

Today, most major transmission and system operators still use commercial software, often from companies such as Siemens and ABB, with custom modifications. They also focus heavily on security, to ensure reliable electricity for hospitals and other critical infrastructure.

But changes in electricity supply may be favoring smarter grids and a more software-focused approach. As energy grids take on more intermittent sources of power, such as solar and wind, it can get harder for ripple control technology to send a reliable signal across the whole grid, Jansen says.

Other changes may also favor more openness, Milano says: “If power system ‘granularity’ is going to increase (e.g., grid-connected microgrids, smart building, aggregators, etc.), then there will be many small companies that will get into the power business from scratch and some of them might be attracted by the ‘open source software’ model.”

Pirate IPTV Box Seller Arrested By LAPD, ABS-CBN Files Multi-Million Dollar Lawsuits

Post Syndicated from Andy original https://torrentfreak.com/pirate-iptv-box-seller-arrested-by-lapd-abs-cbn-files-multi-million-dollar-lawsuits-200227/

ABS-CBN is the largest media and entertainment company in the Philippines but is regularly active in US courts as it attempts to disrupt online piracy.

In April 2019, for example, a district court in Florida ordered the operators of 27 pirate sites to each pay $1 million in damages.

Then, last December, ABS-CBN sued a Texas man for millions of dollars after he allegedly sold pirate streaming devices via Facebook. It now appears that the media giant is set to expand the campaign against those involved in the supply of pirate IPTV devices.

According to ABS-CBN, on February 7, 2020, Los Angeles Police Department carried out a sting operation during which undercover officers purchased five ‘pirate’ set-top boxes from Romula Araneta Castillo, also known as Jon Castillo. The media company reports that the suspect was arrested for alleged violations of California Penal Code 593(d), which relates to “intercepting, receiving, or using any program or other service carried by a multichannel video.”

Just days later, ABS-CBN filed two lawsuits in US district courts, one against Castillo in California and another against his alleged cousin, Alberto Ace Mayol, in Texas. Both lawsuits allege violations of 47 U.S. Code § 605 (unauthorized publication or use of communications) and other offenses under state law.

“Upon information and belief, Defendant has been engaged in a scheme to, without authorization, sell Pirate Equipment that retransmits ABS-CBN’s programming to his customers as Pirate Services,” both of the complaints read.

“[I]n order to gain access to ABS-CBN’s protected communications and copyrighted content, Defendant’s Pirate Equipment is designed to illegally access ABS-CBN’s live communications. This system allows for the circumvention of ABS-CBN’s encryption technology and the reception, disclosure, and publication of ABS-CBN’s protected communications and copyrighted content.”

Together, the lawsuits against both men are worth millions of dollars in damages, should the full amounts be awarded. ABS-CBN appears to have made covert purchases itself and has published photographic evidence on its site.

“This arrest and accompanying civil lawsuits mark the first actions this year by ABS-CBN in a coming wave against the nationwide epidemic of IPTV box sellers,” the company said, commenting on the lawsuits.

“ABS-CBN conducted a months-long investigation into the scheme perpetrated by Castillo and his cousin, Alfaro, including undercover purchases from the targets. The lawsuits allege that Castillo and Alfaro engaged in this multi-state scheme to sell these pirated set top boxes to the unsuspecting public.”

ABS-CBN Global Anti-Piracy Head Elisha Lawrence thanked US police for their assistance.

“We are thankful for the cooperation of the LA Police Dept. in investigating and arresting Castillo, a kingpin in this pirate box scheme. Defrauding the public by selling these fake boxes is a scam operation and preying on innocent people. We are very happy to have the cooperation of the police to enforce against these pirates,” Lawrence said.

The civil lawsuits filed by ABS-CBN can be found here and here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Digital Transformation Do-Over: How to Restart on the Right Foot

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/digital_transformation_leveraging_test_and_measurement_to_accelerate_success

Don’t give up on digital transformation projects that have not delivered as expected. The root cause of failure often lies in data preparation. This tech brief explains how to find out what went wrong and how to get it right the next time.

Get to know the latest AWS Heroes, including the first IoT Heroes!

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/get-to-know-the-latest-aws-heroes-including-the-first-iot-heroes/

The AWS Heroes program recognizes and honors individuals who are prominent leaders in local communities, known for sharing AWS knowledge and facilitating peer-to-peer learning in a variety of ways. The AWS Heroes program grows just as the enthusiasm for all things AWS grows in communities around the world, and there are now AWS Heroes in 35 countries.

Today we are thrilled to introduce the newest AWS Heroes, including the first Heroes in Bosnia, Indonesia, Nigeria, and Sweden, as well as the first IoT Heroes:

Joshua Arvin Lat – National Capital Region, Philippines

Machine Learning Hero Joshua Arvin Lat is the CTO of Complete Business Online, Insites, and Jepto. He has achieved 9 AWS Certifications, and has participated and contributed as a certification Subject Matter Expert to help update the AWS Certified Machine Learning – Specialty exam during the Item Development Workshops. He has been serving as one of the core leaders of the AWS User Group Philippines for the past 4-5 years and also shares knowledge at several international AWS conferences, including AWS Summit Singapore – TechFest and AWS Community Day – Melbourne.

Nofar Asselman – Tel Aviv, Israel

Community Hero Nofar Asselman is the Head of Business Development at Epsagon – an automated tracing platform for cloud microservices, where she initiated Epsagon’s partnership with AWS. Nofar is a key figure at the AWS Partner Community and founded the first-ever AWS Partners Meetup Group. Nofar is passionate about her work with AWS cloud communities, organizes meetups regularly, and participates in conferences, events and user groups. She loves sharing insights and best practices about her AWS experiences in blog posts on Medium.

Filipe Barretto – Rio de Janeiro, Brazil

Community Hero Filipe Barretto is one of the founders of Solvimm, an AWS Consulting Partner since 2013. He organizes the AWS User Group in Rio de Janeiro, Brazil, promoting talks, hands-on labs and study groups for AWS Certifications. He also frequently speaks at universities, introducing students to Cloud Computing and AWS services. He actively participates in other AWS User Groups in Brazil, working to build a strong and bigger community in the country, and, when possible, with AWS User Groups in other Latin American countries.

Stephen Borsay – Portland, USA

IoT Hero Stephen Borsay is a Degreed Computer Engineer and electronic hobbyist with a passion to make IoT and embedded systems understandable and enjoyable to enthusiasts of all experience levels. Stephen authors community IoT projects, as well as develops online teaching materials focused on AWS IoT to solve problems for both professional developers and casual IoT enthusiasts. He founded the Digital Design meetup group in Portland, Oregon which holds regular meetings focusing on hands-on IoT training. He regularly posts IoT tutorials for Hackster.io and you can find his online AWS IoT training courses on YouTube and Udemy.

Ernest Chiang – Taipei City, Taiwan

Community Hero Ernest Chiang, also known as Deng-Wei Chiang, started his AWS journey in 2008. He has been passionate about bridging AWS technology with business through AWS related presentations at local meet-ups, conferences, and online blog posts. Since 2011, many AWS services have been adopted, across AWS Global and China regions, under Ernest’s leadership as the Director of Product & Technology Integration of PAFERS Tech.

Don Coleman – Philadelphia, USA

IoT Hero Don Coleman is the Chief Innovation Officer at Chariot Solutions, where he builds software that leverages a wide range of AWS services. His experience building IoT projects enables him to share knowledge and lead workshops on solving IoT challenges using AWS. He also enjoys speaking at conferences about devices and technology, discussing things like NFC, Bluetooth Low Energy, LoRaWAN, and AWS IoT.

Ken Collins – Norfolk, USA

Serverless Hero Ken Collins is a Staff Engineer at Custom Ink, focusing on DevOps and their Ecommerce Platform with an emphasis on emerging opportunities. With a love for the Ruby programming language and serverless, Ken continues his open source Rails work by focusing on using Rails with AWS Lambda using a Ruby gem called Lamby. Recently he wrote an ActiveRecord adapter to take advantage of Aurora Serverless with Rails on Lambda.

Ewere Diagboya – Lagos, Nigeria

Community Hero Ewere Diagboya started building desktop and web apps using PHP and VB as a software engineer in junior high school. He started his Cloud journey with AWS at Terragon Group, where he grew into the DevOps and Infrastructure Lead. Later he collaborated to speak at the first ever AWS Nigeria Meetup and was the only Nigerian representative at AWS Johannesburg Loft in 2019. He is the co-founder of DevOps Nigeria, shares videos on YouTube showcasing AWS technologies, and has a blog on Medium, called MyCloudSeries.

Dzenan Dzevlan – Mostar, Bosnia and Herzegovina

Community Hero Dzenan Dzevlan is a Cloud and DevOps expert at TN-TECH and has been an AWS user since 2011. In 2016, Dzenan founded AWS User Group Bosnia and helped it grow to three user groups with more than 600 members. This AWS community is now the largest IT community in Bosnia. As a part of his activities, he runs online meetups, a YouTube channel, and the sqlheisenberg.com blog (in Bosnian language) to help people in the Balkans region achieve their AWS certification and start working with AWS.

Ben Ellerby – London, United Kingdom

Serverless Hero Ben Ellerby is VP of Engineering for Theodo and a dedicated member of the Serverless community. He is the editor of Serverless Transformation: a blog, newsletter & podcast sharing tools, techniques and use cases for all things Serverless. Ben speaks about serverless at conferences and events around the world. In addition to speaking, he co-organizes and supports serverless events including the Serverless User Group in London and ServerlessDays London.

Gunnar Grosch – Karlstad, Sweden

Serverless Hero Gunnar Grosch is an evangelist at Opsio based in Sweden. With a focus on building reliable and robust serverless applications, Gunnar has been one of the driving forces in creating techniques and tools for using chaos engineering in serverless. He regularly and passionately speaks at events on these and other serverless topics around the world. Gunnar is also deeply involved in the community by organizing AWS User Groups and Serverless Meetups in the Nordics, as well as being an organizer of ServerlessDays Stockholm and AWS Community Day Nordics. A variety of his contributions can be found on his personal website.

Scott Liao – New Taipei City, Taiwan

Community Hero Scott Liao is a DevOps Engineer and Manager at 104 Corp. His work is predominantly focused on Data Center and AWS Cloud solution architecture. He is interested in building hyper-scale DevOps environments for containers using AWS CloudFormation, CDK, Terraform, and various open-source tools. Scott has spoken regularly at AWS-focused events, including AWS User Groups, Cloud Edge Summit Taipei, DevOpsDays Taipei, and other conferences. He also shares his expertise through writing, producing content for blogs and IT magazines in Taiwan.

Austin Loveless – Denver, USA

Community Hero Austin Loveless is a Cloud Architect at Photobucket and Founder of the AWSMeetupGroup. He travels around the country, teaching people of all skill levels about AWS Cloud Technologies. He live-streams all his events on YouTube. He partners with large software companies (AWS, MongoDB, Confluent, Galvanize, Flatiron School) to help grow the meetup group and teach more people. Austin also routinely blogs on Medium under the handle AWSMeetupGroup.

Efi Merdler-Kravitz – Tel Aviv, Israel

Serverless Hero Efi Merdler-Kravitz is Director of Engineering at Lumigo.io, a monitoring and debugging platform for AWS serverless applications built on a 100% serverless backend. As an early and enthusiastic adopter of serverless technology, Efi has been racking up the air miles as a frequent speaker at serverless events around the globe, and writes regularly on the topic for the Lumigo blog. Efi began his journey into serverless as head of engineering at Coneuron, building its entire stack on Lambda, S3, API Gateway, and Firebase, while perfecting the art of helping developers transition to a serverless mindset.

Dhaval Nagar – Surat, India

Serverless Hero Dhaval Nagar is the founder and director of cloud consulting firm AppGambit based in India. He thinks that serverless is not just another method but a big paradigm shift in modern computing that will have a major impact on future technologies. Dhaval has been building on AWS since early 2015. Coincidentally, the first service that he picked on AWS was Lambda. He has 11 AWS Certifications, is a regular speaker at AWS user groups and conferences, and frequently writes on his Medium blog. He runs the Surat AWS User Group and Serverless Group and has organized over 20 meetups since it started in 2018.

Tomasz Ptak – London, United Kingdom

Machine Learning Hero Tomasz Ptak is a software engineer with a focus on tackling technical debt, transforming legacy products to maintainable projects and delivering a Developer experience that enables teams to achieve their objectives. He was a participant in the AWS DeepRacer League, a winner in Virtual League’s September race and a 2019 season finalist. He joined the AWS DeepRacer Community on day one to become one of its leaders. He runs the community blog, the knowledge base and maintains a DeepRacer log analysis tool.

Mike Rahmati – Sydney, Australia

Community Hero Mike Rahmati is Co-Founder and CTO of Cloud Conformity (acquired by Trend Micro), a leader in public cloud infrastructure security and compliance monitoring, where he helps organizations design and build cloud solutions that are Well-Architected at all times. As an active community member, Mike has designed thousands of best practices for AWS, and contributed to a number of open source AWS projects including Cloud Conformity Auto Remediation using AWS Serverless.

Namrata Shah (Nam) – New York, USA

Community Hero Nam Shah is a dynamic passionate technical leader based in the New York/New Jersey Area focused on custom application development and cloud architecture. She has over twenty years of professional information technology consulting experience delivering complex systems. Nam loves to share her technical knowledge and frequently posts AWS videos on her YouTube Channel and occasionally posts AWS courses on Udemy.

Yan So – Seoul, South Korea

Machine Learning Hero Yan So is a senior data scientist who possesses a variety of experience dealing with business issues by utilizing big data and machine learning. He was a co-founder of the Data Science Group of the AWS Korea Usergroup (AWSKRUG) and hosted over 30 meetups and AI/ML hands-on labs since 2017. He regularly speaks on interesting topics such as Amazon SageMaker GroundTruth on AWS Community Day, Zigzag’s Data Analytics Platform at the AWS Summit Seoul, and a recommendation engine on Amazon Personalize in AWS Retail & CPG Day 2019.

Steve Teo – Singapore

Community Hero Steve Teo has been serving the AWS User Group Singapore Community since 2017, which has over 5000 members. Having benefited from Meetups at the start of his career, he makes it his personal mission to pay it forward and build the community so that others might reap the benefits and contribute back. The community in Singapore has grown to have monthly meetups and now includes sub-chapters such as the Enterprise User Group, as well as Cloud Seeders, a member-centric Cloud Learning Community for Women, Built by Women. Steve also serves as a speaker in AWS APAC Community Conferences, where he shares on his Speakerdeck.

Hein Tibosch – Bali, Indonesia

IoT Hero Hein Tibosch is a skilled software developer, specializing in embedded applications and working as an independent at his craft for over 17 years. Hein is exemplary in his community contributions for FreeRTOS, as an active committer to the FreeRTOS project and the most active customer on the FreeRTOS Community Forums. Over the last 8 years, Hein’s contributions to FreeRTOS have made a significant impact on the successful adoption of FreeRTOS by embedded developers of all technical levels and backgrounds.
 
You can learn all about the AWS Heroes and connect with a Hero near you by visiting the AWS Hero website.

Ross;

When Artists, Engineers, and PepsiCo Collaborated, Then Clashed at the 1970 World’s Fair

Post Syndicated from W. Patrick McCray original https://spectrum.ieee.org/tech-history/silicon-revolution/when-artists-engineers-and-pepsico-collaborated-then-clashed-at-the-1970-worlds-fair

On 18 March 1970, a former Japanese princess stood at the center of a cavernous domed structure on the outskirts of Osaka. With a small crowd of dignitaries, artists, engineers, and business executives looking on, she gracefully cut a ribbon that tethered a large red balloon to a ceremonial Shinto altar. Rumbles of thunder rolled out from speakers hidden in the ceiling. As the balloon slowly floated upward, it appeared to meet itself in midair, reflecting off the massive spherical mirror that covered the walls and ceiling.

With that, one of the world’s most extravagant and expensive multimedia installations officially opened, and the attendees turned to congratulate one another on this collaborative melding of art, science, and technology. Underwritten by PepsiCo, the installation was the beverage company’s signal contribution to Expo ’70, the first international exposition to be held in an Asian country.

A year and a half in the making, the Pepsi Pavilion drew eager crowds and elicited effusive reviews. And no wonder: The pavilion was the creation of Experiments in Art and Technology—E.A.T.—an influential collective of artists, engineers, technicians, and scientists based in New York City. Led by Johan Wilhelm “Billy” Klüver, an electrical engineer at Bell Telephone Laboratories, E.A.T. at its peak had more than a thousand members and enjoyed generous support from corporate donors and philanthropic foundations. Starting in the mid-1960s and continuing into the ’70s, the group mounted performances and installations that blended electronics, lasers, telecommunications, and computers with artistic interpretations of current events, the natural world, and the human condition.

E.A.T. members saw their activities transcending the making of art. Artist–engineer collaborations were understood as creative experiments that would benefit not just the art world but also industry and academia. For engineers, subject to vociferous attacks about their complicity in the arms race, the Vietnam War, environmental destruction, and other global ills, the art-and-technology movement presented an opportunity to humanize their work.

Accordingly, Klüver and the scores of E.A.T. members in the United States and Japan who designed and built the pavilion considered it an “experiment in the scientific sense,” as the 1972 book Pavilion: Experiments in Art and Technology stated. Klüver pitched the installation as a “piece of hardware” that engineers and artists would program with “software” (that is, live performances) to create an immersive visual, audio, and tactile experience. As with other E.A.T. projects, the goal was not about the product but the process.

Pepsi executives, unsurprisingly, viewed their pavilion on somewhat different terms. These were the years of the Pepsi Generation, the company’s mildly countercultural branding. For them, the pavilion would be at once an advertisement, a striking visual statement, and a chance to burnish the company’s global reputation. To that end, Pepsi directed close to US $2 million (over $13 million today) to E.A.T. to create the biggest, most elaborate, and most expensive art project of its time.

Perhaps it was inevitable, but over the 18 months it took E.A.T. to execute the project, Pepsi executives grew increasingly concerned about the group’s vision. Just a month after the opening, the partnership collapsed amidst a flurry of recriminating letters and legal threats. And yet, despite this inglorious end, the participants considered the pavilion a triumph.

The pavilion was born during a backyard conversation in the fall of 1968 between David Thomas, vice president in charge of Pepsi’s marketing, and his neighbor, Robert Breer, a sculptor and filmmaker who belonged to the E.A.T. collective. Pepsi had planned to contract with Disney to build its Expo ’70 exhibition, as it had done for the 1964 World’s Fair in New York City. Some Pepsi executives were, however, concerned that the conservative entertainment company wouldn’t produce something hip enough for the burgeoning youth market, and they had memories of the 1964 project, when Disney ran well over its already considerable budget. Breer put Thomas in touch with Klüver, productive dialogue ensued, and the company hired E.A.T. in December 1968.

Klüver was a master at straddling the two worlds of art and science. Born in Monaco in 1927 and raised in Stockholm, he developed a deep appreciation for cinema as a teen, an interest he maintained while studying with future Nobel physicist Hannes Alfvén. After earning a Ph.D. in electrical engineering at the University of California, Berkeley, in 1957, he accepted a coveted research position at Bell Labs in Murray Hill, N.J.

While keeping up a busy research program, Klüver made time to explore performances and gallery openings in downtown Manhattan and to seek out artists. He soon began collaborating with artists such as Yvonne Rainer, Andy Warhol, Jasper Johns, and Robert Rauschenberg, contributing his technical expertise and helping to organize exhibitions and shows. His collaboration with Jean Tinguely on a self-destructing sculpture, called Homage to New York, appeared on the April 1969 cover of IEEE Spectrum. Klüver emerged as the era’s most visible and vocal spokesperson for the merger of art and technology in the United States. Life magazine called him the “Edison-Tesla-Steinmetz-Marconi-Leonardo da Vinci of the American avant-garde.”

Klüver’s supervisor, John R. Pierce, was tolerant and even encouraging of his activities. Pierce had his own creative bent, writing science fiction in his spare time and collaborating with fellow Bell engineer Max Mathews to create computer-generated music. Meanwhile, Bell Labs, buoyed by the economic prosperity of the 1960s, supported a small coterie of artists-in-residence, including Nam June Paik, Lillian Schwartz, and Stan VanDerBeek.

In time, Klüver devised more ambitious projects. For his 1966 orchestration of 9 Evenings: Theatre and Engineering, nearly three dozen engineering colleagues worked with artists to build wireless radio transmitters, carts that floated on cushions of air, an infrared television system, and other electronics. Held at New York City’s 69th Regiment Armory—which in 1913 had hosted a pathbreaking exhibition of modern art9 Evenings expressed a new creative culture in which artists and engineers collaborated.

In the midst of organizing 9 Evenings, Klüver, along with artists Rauschenberg and Robert Whitman and Bell Labs engineer Fred Waldhauer, founded Experiments in Art and Technology. By the end of 1967, more than a thousand artists and technical experts had joined. And a year later, E.A.T. had scored the commission to create the Pepsi Pavilion.

From the start, E.A.T. envisioned the pavilion as a multimedia environment that would offer a flexible, personalized experience for each visitor and that would express irreverent, uncommercial, and antiauthoritarian values.

But reaching consensus on how to realize that vision took months of debate and argument. Breer wanted to include his slow-moving cybernetic “floats”—large, rounded, self-driving sculptures powered by car batteries. Whitman was becoming intrigued with lasers and visual perception, and felt there should be a place for that. Forrest “Frosty” Myers argued for an outdoor light installation using searchlights, his focus at the time. Experimental composer David Tudor imagined a sophisticated sound system that would transform the Pepsi Pavilion into both recording studio and instrument.

“We’re all painters,” Klüver recalled Rauschenberg saying, “so let’s do something nonpainterly.” Rauschenberg’s attempt to break the stalemate prompted a further flood of suggestions. How about creating areas where the temperature changed? Or pods that functioned as anechoic chambers—small spaces of total silence? Maybe the floor could have rear-screen projections that gave visitors the impression of walking over flames, clouds, or swimming fish. Perhaps wind tunnels and waterfalls could surround the entrances.

Eventually, Klüver herded his fellow E.A.T. members into agreeing to an eclectic set of tech-driven pieces. The pavilion building itself was a white, elongated geodesic dome, which E.A.T. detested and did its best to obscure. And so a visitor approaching the finished pavilion encountered not the building but a veil of artificial fog that completely enshrouded the structure. At night, the fog was dramatically lit and framed by high-intensity xenon lights designed by Myers.

On the outdoor terrace, Breer’s white floats rolled about autonomously like large bubbles, emitting soft sounds—speech, music, the sound of sawing wood—and gently reversing themselves when they bumped into something. Steps led downward into a darkened tunnel, where visitors were greeted by a Japanese hostess wearing a futuristic red dress and bell-shaped hat and handed a clear plastic wireless handset. Stepping farther into the tunnel, they would be showered with red, green, yellow, and blue light patterns from a krypton laser system, courtesy of Whitman.

Ascending into the main pavilion, the visitors’ attention would be drawn immediately upward, where their reflections off the huge spherical mirror made it appear that they were floating in space. The dome also created auditory illusions, as echoes and reverberations toyed with people’s sense of acoustic reality. The floors of the circular room sloped gently upward to the center, where a glass insert in the floor allowed visitors to peer down into the entrance tunnel with its laser lights. Other parts of the floor were covered in different materials and textures—stone, wood, carpet. As the visitor moved around, the handset delivered a changing array of sounds. While a viewer stood on the patch of plastic grass, for example, loop antennas embedded in the floor might trigger the sound of birds or a lawn mower.

The experience was deeply personal: You could wander about at your own pace, in any direction, and compose your own trippy sensory experience.

To pull off such a feat of techno-art required an extraordinary amount of engineering. The mirror dome alone took months to design and build. E.A.T. viewed the mirror as, in Frosty Myers’s words, the “key to the whole Pavilion,” and it dictated much of what was planned for the interior. The research and testing for the mirror largely fell to members of E.A.T.’s Los Angeles chapter, led by Elsa Garmire. The physicist had done her graduate work at MIT with laser pioneer Charles Townes and then accepted a postdoc in electrical engineering at Caltech. But Garmire found the environment for women at Caltech unsatisfying, and she began to consider the melding of art and engineering as an alternate career path.

After experimenting with different ideas, Garmire and her colleagues designed a mirror modeled after the Mylar balloon satellites launched by NASA. A vacuum would hold the mirror’s Mylar lining in place, while a rigid outer shell held in the vacuum. E.A.T. unveiled a full-scale prototype of the mirror in September 1969 in a hangar at a Marine Corps airbase. It was built by G.T. Schjeldahl Co., the Minnesota-based company responsible for NASA’s Echo and PAGEOS [PDF] balloon satellites. Gene Youngblood, a columnist for an underground newspaper, found himself mesmerized when he ventured inside the “giant womb-mirror” for the first time. “I’ve never seen anything so spectacular, so transcendentally surrealistic.… The effect is mind-shattering,” he wrote. What you saw depended on the ambient lighting and where you were standing, and so the dome fulfilled E.A.T.’s goal of providing each visitor with a unique, interactive experience. Such effects didn’t come cheap: By the time Expo ’70 started, the cost of the pavilion’s silver lining came to almost $250,000.

An even more visually striking feature of the pavilion was its exterior fog. Ethereal in appearance, it required considerable real-world engineering to execute. This effort was led by Japanese artist Fujiko Nakaya, who had met Klüver in 1966 in New York City, where she was then working. Born in 1933 on the northern island of Hokkaido, she was the daughter of Ukichiro Nakaya, a Japanese physicist famous for his studies of snow crystals. When E.A.T. got the Pepsi commission, Klüver asked Fujiko to explore options for enshrouding the pavilion in clouds.

Nakaya’s aim was to produce a “dense, bubbling fog,” as she wrote in 1972, for a person “to walk in, to feel and smell, and disappear in.” She set up meteorological instruments at the pavilion site to collect baseline temperature, wind, and humidity data. She also discussed several ways of generating fog with scientists in Japan. One idea they considered was dry ice. Solid chunks of carbon dioxide mixed with water or steam could indeed make a thick mist. But the expo’s health officials ruled out the plan, claiming the massive release of CO2 would attract mosquitoes.

Eventually, Nakaya decided that her fog would be generated out of pure water. For help, she turned to Thomas R. Mee, a physicist in the Pasadena area whom Elsa Garmire knew. Mee had just started his own company to make instruments for weather monitoring. He had never heard of Klüver or E.A.T., but he knew of Nakaya’s father’s pioneering research on snow.

Mee and Nakaya figured out how to create fog by spraying the water under high pressure through copper lines fitted with very narrow nozzles. The lines hugged the edges of the geodesic structure, and the 2,500 or so nozzles atomized some 41,600 liters of water an hour. The pure white fog spilled over the structure’s angled and faceted roof and drifted gently over the fairground. Breer compared it to the clouds found in Edo-period Japanese landscape paintings.

While the fog and mirrored dome were the pavilion’s most obvious features, hidden away in a control room sat an elaborate computerized sound system.

Designed by Tudor, the system could accept signal inputs from 32 sources, which could be modified, amplified, and toggled among 37 speakers. The sources could be set to one of three modes: “line sound,” in which the sound switched rapidly from speaker to speaker in a particular pattern; “point sound,” in which the sound emanated from one speaker; and “immersion” or “environmental” mode, where the sound seemed to come from all directions. “The listener would have the impression that the sound was somehow embodied in a vehicle that was flying about him at varying speeds,” Tudor explained.

The audio system also served as an experimental lab. Much as researchers might book time on a particle accelerator or a telescope, E.A.T. invited “resident programmers” to apply to spend several weeks in Osaka exploring the pavilion’s potential as an artistic instrument. The programmers would have access to a library of several hundred “natural environmental sounds” as well as longer recordings that Tudor and his colleagues had prepared. These included bird calls, whale songs, heartbeats, traffic noises, foghorns, tugboats, and ocean liners. Applicants were encouraged to create “experiences that tend toward the real rather than the philosophical.” Perhaps in deference to its patron’s conservatism, E.A.T. specified it was “not interested in political or social comment.”

In sharp contrast to E.A.T.’s sensibilities, Pepsi executives didn’t view the pavilion as an experiment or even a work of art but rather as a product they had paid for. Eventually, they decided that they were not well pleased by what E.A.T. had delivered. On 20 April 1970, little more than a month after the pavilion opened to the public, Pepsi informed Klüver that E.A.T.’s services were no longer needed. E.A.T. staff who had remained in Osaka to operate the pavilion smuggled the audio tapes out, leaving Pepsi to play a repetitive and banal soundtrack inside its avant-garde building for the remaining months of the expo.

Despite E.A.T.’s abrupt ouster, many critics responded favorably to the pavilion. A Newsweek critic called it “an electronic cathedral in the shape of a geodesic dome,” neither “fine art nor engineering but a true synthesis.” Another critic christened the pavilion a “total work of art”—a Gesamtkunstwerk—in which the aesthetic and technological, human and organic, and mechanical and electric were united.

In hindsight, the Pepsi Pavilion was really the apogee for the art-and-technology movement that burst forth in the mid-1960s. This first wave did not last. Some critics contended that in creating corporate-sponsored large-scale collaborations like the pavilion, artists compromised themselves aesthetically and ethically—“freeload[ing] at the trough of that techno-fascism that had inspired them,” as one incensed observer wrote. By the mid-1970s, such expensive and elaborate projects had become as discredited and out of fashion as moon landings.

Nonetheless, for many E.A.T. members, the Pepsi Pavilion left a lasting mark. Elsa Garmire’s artistic experimentation with lasers led to her cofounding a company, Laser Images, which built equipment for laser light shows. Riffing on the popularity of planetarium shows, the company named its product the “laserium,” which soon became a pop-culture fixture.

Meanwhile, Garmire shifted her professional energies back to science. After leaving Caltech for the University of Southern California, she went on to have an exceptionally successful career in laser physics. She served as engineering dean at Dartmouth College and president of the Optical Society of America. Years later, Garmire said that working with artists influenced her interactions with students, especially when it came to cultivating a sense of play.

After Expo ’70 ended, Mee filed for a U.S. patent to cover an “Environmental Control Method and Apparatus” derived from his pavilion work. As his company, Mee Industries, grew, he continued his collaborations with Nakaya. Even after Mee’s death in 1998, his company contributed hardware to installations Nakaya designed for the Guggenheim Museum in Bilbao, Spain. More recently, her Fog Bridge [PDF] was integrated into the Exploratorium building in San Francisco.

Billy Klüver insisted that the success of his organization would ultimately be judged by the degree to which it became redundant. By that measure, E.A.T. was indeed a success, even if events didn’t unfold quite the way he imagined. At universities in the United States and Europe, dozens of programs now explore the intersections of art, technology, engineering, and design. It’s common these days to find tech-infused art in museum collections and adorning public spaces. Events like Burning Man and its many imitators continue to explore the experimental edges of art and technology—and to emphasize the process over the product.

And that may be the legacy of the pavilion and of E.A.T.: They revealed that engineers and artists could forge a common creative culture. Far from being worlds apart, their communities share values of entrepreneurship, adaptability, and above all, the collective desire to make something beautiful.

This article appears in the March 2020 print issue as “Big in Japan.”

[$] An end to high memory?

Post Syndicated from corbet original https://lwn.net/Articles/813201/rss

This patch from Johannes Weiner seemed like a straightforward way to improve memory-reclaim performance; without it, the virtual filesystem layer throws away memory that the memory-management subsystem thinks is still worth keeping. But that patch quickly ran afoul of a feature (or “misfeature” depending on who one asks) from the distant past, one which goes by the name of “high memory”. Now, more than 20 years after its addition, high memory may be brought down low, as developers consider whether it should be deprecated and eventually removed from the kernel altogether.

50 Billion Restores and Counting

Post Syndicated from Yev original https://www.backblaze.com/blog/50-billion-restores-and-counting/

Backblaze Over 50 Billion Served

50,000,000,000—that’s a large number. It also happens to be the milestone that we crossed (on February 5th, 2020 at 14:47 UTC) for files restored from our Computer Backup service! Back in 2016, Backblaze hit 20 Billion files restored for our customers. It took us almost 9 years to get to that number, and only another 4 years to more than double it (and that’s not even including all the Backblaze B2 Cloud Storage files that get accessed and downloaded every day).

50 Billion is a giant number, but it’s not just a number to us. It’s baby pictures, first step videos, PhD theses, long lost tax forms from years past, powerpoint presentations, digitized family albums, art projects, documents and writing, manuscripts, book outlines, and all manner of memories. We love that we’ve built a sustainable business around restoring people’s files which they may have thought were lost forever.

The last time we wrote about a restore milestone, we took a look at a typical month in the life of our restore system. Let’s revisit that and look at the stats for January 2020, with a few new ones thrown in:

January 2020 Stats:

  • 28,841 Total Restores
  • 1,119,500,858 (1.1 Billion) Total Files Restored
  • 2.17 Petabytes of Data Restored
  • 3 Terabytes per hour—equivalent to a good sized external hard drive
  • 48 Gigabytes per minute—about one 4K UHD Blu-Ray movie
  • 810 Megabytes per second—just over one CD’s worth of data
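
As a quick back-of-the-envelope check of those rates (a sketch using the January total above, assuming decimal units and a 31-day month):

```python
# Back-of-the-envelope check of the January 2020 restore rates,
# assuming decimal (SI) units and a 31-day month.
bytes_restored = 2_169_762_976_872_020      # total bytes restored in January 2020

seconds_in_month = 31 * 24 * 60 * 60        # 2,678,400 seconds

mb_per_second = bytes_restored / seconds_in_month / 1e6    # megabytes per second
gb_per_minute = mb_per_second * 60 / 1e3                   # gigabytes per minute
tb_per_hour   = gb_per_minute * 60 / 1e3                   # terabytes per hour

print(f"{mb_per_second:.0f} MB/s, {gb_per_minute:.0f} GB/min, {tb_per_hour:.1f} TB/h")
# Prints roughly "810 MB/s, 49 GB/min, 2.9 TB/h" -- in line with the figures above.
```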

Restores By Operating System:

  • 49.08% were Mac
  • 50.92% were Windows

Of all January 2020 restores:

  • 97.82% were Zip
  • 1.63% were USB HD
  • 0.54% were USB Flash Drive

The Average Number of Files Per Restore:

  • 29,927 files – Zip
  • 518,756.23 files – USB HD
  • 232,711.93 files – USB Flash Drive

The Average Size Of a Restore:

  • 42.16 GB – Zip
  • 2,081.42 GB – USB HD
  • 131.95 GB – USB Flash Drive

Total Data Restored:

  • Bytes: 2,169,762,976,872,020
  • Kilobytes: 2,169,762,976,872.02
  • Megabytes: 2,169,762,976.87
  • Gigabytes: 2,169,762.98
  • Terabytes: 2,169.76
  • Petabytes: 2.17

Based on ZIP restores:

Range in GB | % of Restores
< 1 | 43.65%
1 – 10 | 19.38%
10 – 25 | 11.90%
25 – 50 | 8.90%
50 – 75 | 2.98%
75 – 100 | 1.92%
100 – 200 | 4.80%
200 – 300 | 2.38%
300 – 400 | 1.60%
400 – 500 | 1.41%
> 500 | 1.06%

We started Backblaze with a goal of preventing data loss, and we’re now recovering over 2 Petabytes of data per month, which is a stat that we are, to say the least, very proud of. To put that into perspective, it took us 2 ½ years to reach 2 Petabytes of customer data under management. Now we’re helping our customers restore that amount of data on a monthly basis.

We want to thank our Backblaze customers, and remind folks of how easy it is to restore data with us. You can download it for free via the web, recover your files via a USB Hard Drive or Flash Key, and use our Mobile apps to access your data on iOS and Android! To learn more, visit our restore webpage. If you want to test a restore, try this easy web guide:

Web Guide for Restoring Data

Do you have a great story of Backblaze helping you recover data? We’d love to hear it and possibly highlight it in a future blog post. Just comment below with the story of how Backblaze helped you get your data back! Need an example? Here’s a great one.

The post 50 Billion Restores and Counting appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Cornell’s Prototype Low-Energy Particle Accelerator Completes Key Test

Post Syndicated from Dan Garisto original https://spectrum.ieee.org/green-tech/conservation/cornells-prototype-lowenergy-particle-accelerator-completes-key-test

Accelerator physicists from Cornell University and Brookhaven National Laboratory have facilitated an unprecedented energy handoff between electrons.

As particle colliders have gotten bigger and more expensive to build and operate, physicists have begun to look for nontraditional ways to accelerate particles. One potential solution is to use energy recovery linear accelerators, or ERLs. These new particle accelerators transfer energy from decelerating electrons to give fresh particles a boost—similar to the way speed skaters transfer energy by physically pushing their teammates forward to begin each new leg of a relay.

CBETA (short for Cornell-BNL ERL Test Accelerator) is a proof-of-concept experiment for such next-generation accelerating technology. Last December, researchers managed to achieve what’s called eight-pass energy recovery for CBETA, a benchmark that shows the technology’s potential for future colliders.

Conventional particle accelerators fall into one of two main classes: linear accelerators or storage rings. Linear accelerators, also known as linacs, are hollow metal chambers filled with strong electric fields. These fields flip on and off, and with the right timing, charged particles inside the chambers can be propelled forward or backward. The resulting particle beam is dense but has relatively few particles.

Storage rings circulate particles millions of times by bending their paths with magnets. Particles can be continually injected into a storage ring, which creates a beam with more particles. But the beam has lower density, becoming diluted as the particles circulate.

Georg Hoffstaetter, a physics professor at Cornell who leads CBETA, says ERLs combine the strengths of both. “We have two traditional accelerator technologies: linacs, which can provide low current but very dense beams, and rings, which can provide high current but less dense beams,” he says. “An ERL merges these two technologies to get both advantages—to get high currents for very dense beams.”

Trying to make two beams of particles collide results mostly in misses because the particles are incredibly small. Physicists love dense beams and high currents because both qualities provide more collisions and therefore more data.

The concept of an ERL has been around since 1965, when Cornell physicist Maury Tigner proposed it, but the technology has become attractive only in recent years, in part because of how complex the energy handoff is to execute.

In ERLs, particles are initially accelerated by a linear accelerator. Magnets then “loop” the particles back to the beginning so that they pass through the linear accelerator again. In CBETA, electrons make eight full passes. On the first four, the electrons gain energy. But after the fourth pass, they arrive out of sync, and the electric field, instead of pushing them forward, slows them down.

As with speed skaters, when these electrons slow down they lose their kinetic energy. But energy is conserved—it has to go somewhere. For skaters, the energy moves through a push to the next skater; for electrons, the energy moves through the electric field to the next accelerating electron. After an electron finishes its fourth deceleration, it’s discarded.
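
One minimal way to write down that bookkeeping, assuming for the sake of illustration that every linac pass adds or removes roughly the same energy increment ΔE (an idealization, not a CBETA measurement):

$$E_{n} \approx E_{\text{inj}} + n\,\Delta E \;\; (n = 1,\dots,4), \qquad E_{4+m} \approx E_{\text{inj}} + (4-m)\,\Delta E \;\; (m = 1,\dots,4)$$

After the eighth pass the electron is back near its injection energy, and the energy it surrendered on each decelerating pass remains in the accelerating field, available to boost the next bunch.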

Because they combine the advantages of both linacs and storage rings, ERLs present a tempting alternative to current collider tech. Besides CBETA, a few other ERLs have achieved full energy recovery, but not for eight passes. More passes give the electrons higher energy, but this also makes the particles more difficult to control.

“ERLs are notoriously hard to commission, and the fact that they’ve managed eight-pass recovery using permanent magnets is quite a feat,” says Ryan Bodenstein, an accelerator physicist at the Belgian Nuclear Research Centre. “I’m really quite excited about this breakthrough.”

The European, Japanese, and American particle-physics communities are deliberating what future accelerators to fund. CBETA’s success may cause them to take another look at ERLs—which, thanks to their smaller size and power savings, reduce costs. Some future experiments, such as an electron-ion collider to be built at Brookhaven, will use ERLs.

ERLs still face challenges, though. There are questions about whether the handoff would go as smoothly in a real collider: Smashing beams of electrons with ions or other particles could throw off the timing of the sensitive energy handoff. Design complications could take years to smooth out.

“I think the ideas should be pursued and investigated further,” says Bodenstein. “And even if it doesn’t really work out in this case, I think it will provide some great insights.”

This article appears in the March 2020 print issue as “New Particle-Accelerating Tech Passes Test.”

Survey Says: Tech Jobs Are the Best Jobs

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/at-work/tech-careers/tech-jobs-best-jobs

What makes a job a really good job? Job search site Indeed defines it as a combination of salary, demand as represented by the share of job postings, and growth in the number of job postings for a particular title.

By that definition, tech jobs have generally done well. For the past few years Indeed has used these factors to rank a broad range of careers in the U.S., including doctors, lawyers, and realtors, as long as the average salary is at least $75,000 and the site sees 20 job postings per million jobs in its database.

In Indeed’s rankings, jobs in the tech category claimed five of the top 10 slots in 2018 and three in 2019. This year, however, tech jobs claimed a whopping seven of the top ten slots, pushing out all other professions except real estate agent, dentist, and sales director.

Indeed’s data confirms job reviews site Glassdoor’s 2020 list of top jobs, which also had seven tech jobs in the top ten, though a slightly different seven. In that ranking, which included job satisfaction among its factors, front-end engineer came out on top.

What are Indeed’s great tech jobs? Software architect came out on top, driven by demand. Full stack developer came in second, driven by the growth in the number of job postings. Dentists and doctors, however, still top the average salary charts. The 2020 top ten are listed in the table below.

Top 10 Jobs in 2020

Rank | Job Title | Average Base Salary (2019) | Postings per 1 Million Jobs Posted (2019) | Growth in Postings (2016-2019)
1 | Software Architect | $119,715 | 1,424 | 18.64%
2 | Full Stack Developer | $94,164 | 893 | 161.98%
3 | Real Estate Agent | $90,439 | 675 | 157.08%
4 | Dentist | $184,586 | 674 | 31.69%
5 | Development Operations Engineer | $108,761 | 635 | 69.72%
6 | Electrical Engineer | $79,842 | 632 | 20.79%
7 | Java Developer | $93,820 | 618 | 11.14%
8 | Data Scientist | $105,510 | 615 | 77.57%
9 | IT Security Specialist | $94,984 | 562 | 12.47%
10 | Sales Director | $77,814 | 556 | 16.75%

Source: Indeed

 

Gen X Performance Tuning

Post Syndicated from Sung Park original https://blog.cloudflare.com/gen-x-performance-tuning/


We are using the AMD 2nd Gen EPYC 7642 for our tenth-generation “Gen X” servers. We found many aspects of this processor compelling, such as the performance gains from its frequency bump and its cache-to-core ratio. We have partnered with AMD to get the best performance out of this processor, and today we are highlighting the tuning efforts that led to an additional 6% performance.


Thermal Design Power & Dynamic Power

Thermal design power (TDP) and dynamic power, among others, play a critical role when tuning a system. Many people share the belief that thermal design power is the maximum or average power drawn by the processor. The 48-core AMD EPYC 7642 has a TDP rating of 225W, which is just as high as that of the 64-core AMD EPYC 7742. Intuitively, fewer cores should translate into lower power consumption, so why is the AMD EPYC 7642 expected to draw just as much power as the AMD EPYC 7742?

[Figure: TDP comparison between the EPYC 7642, EPYC 7742, and the top-end EPYC 7H12]

Let’s take a step back and understand that TDP does not always mean the maximum or average power that the processor will draw. At a glance, TDP may provide a good estimate of the processor’s power draw, but TDP is really about how much heat the processor is expected to generate. TDP should be used as a guideline for designing cooling solutions with appropriate thermal capacitance. The cooling solution is expected to dissipate heat up to the TDP indefinitely; in turn, this helps chip designers determine a power budget and create new processors around that constraint. In the case of the AMD EPYC 7642, the extra power budget was spent on retaining all 256 MiB of L3 cache and on letting the cores operate at a higher sustained frequency as needed during our peak hours.

The overall power drawn by the processor depends on many different factors; dynamic or active power establishes the relationship between power and frequency. Dynamic power is a function of the switched capacitance of the chip itself, the supply voltage, the frequency of the processor, and the activity factor. The activity factor depends on the characteristics of the workloads running on the processor, and different workloads have different characteristics. Some examples include Cloudflare Workers or Cloudflare for Teams; the hotspots in these or any other programs will exercise different parts of the processor, affecting the activity factor.
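
For reference, the textbook first-order relationship between these quantities (a general CMOS approximation, not a figure taken from AMD’s documentation) is:

$$P_{\text{dynamic}} \approx \alpha \, C \, V^{2} \, f$$

where α is the activity factor, C the switched capacitance, V the supply voltage, and f the operating frequency. The quadratic dependence on voltage is one reason a modest frequency (and voltage) bump can have an outsized effect on power draw.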


Determinism Modes & Configurable TDP

The latest AMD processors continue to implement AMD Precision Boost, which opportunistically allows the processor to operate at a higher frequency than its base frequency. How much higher the frequency can go depends on many different factors such as electrical, thermal, and power limitations.

AMD offers a knob known as determinism modes. This knob affects power and frequency, placing emphasis on one over the other depending on the determinism mode selected. There is a white paper posted on AMD’s website that goes into the nuanced details of determinism; remembering how frequency and power are related, this was the simplest definition I took away from the paper:

Performance Determinism – Power is a function of frequency.

Power Determinism – Frequency is a function of power.

Another knob available to us is Configurable Thermal Design Power (cTDP), which allows the end user to reconfigure the factory-default thermal design power. The AMD EPYC 7642 is rated at 225W; however, we have been given guidance from AMD that this particular part can be reconfigured up to 240W. As mentioned previously, the cooling solution must support up to the TDP to avoid throttling. We did our due diligence and tested that our cooling solution can reliably dissipate up to 240W, even at higher ambient temperatures.

We gave these two knobs a try and got the results shown in the figures below, using a sustained load of 10 KiB web assets served over HTTPS, provided by our performance team, to properly heat up the processor. The figures are broken down into the average operating frequency across all 48 cores, the total package power drawn from the socket, and the highest reported die temperature out of the 8 dies laid out on the AMD EPYC 7642. Each figure compares the results we obtained from power and performance determinism, and finally cTDP at 240W using power determinism.

Performance determinism clearly emphasized stabilizing the operating frequency by matching it to the lowest-performing core over time. This mode appears to be ideal for tuners who value predictability over maximizing the processor’s operating frequency. In other words, the number of cycles the processor has at its disposal every second should be predictable. This is useful if two or more cores share data dependencies: by allowing the cores to work in unison, it can prevent one core from stalling another. Power determinism, on the other hand, maximized power and frequency as much as it could.


Heat generated by power can be compounded even further by ambient temperature. All components should stay within safe operating temperatures at all times, as specified by the vendor’s datasheet. If unsure, please reach out to your vendor as soon as possible. We put the AMD EPYC 7642 through several thermal tests and determined that it will operate within safe operating temperatures. Before diving into the figure below, it is important to mention that our fan speed ramped up over time, preventing the processor from reaching its critical temperature; in other words, our cooling solution (a combination of fans and a heatsink rated for 240W) worked as intended. We have yet to see the processors throttle in production. The figure below shows the highest temperature of the 8 dies laid out on the AMD EPYC 7642.


An unexpected byproduct of performance determinism was frequency jitter. It took longer for performance determinism to reach a steady-state frequency, which ran counter to the predictable performance that the mode was meant to deliver.


Finally, here are the real-world deltas, with performance determinism and the factory-default TDP of 225W as the baseline. Deltas under 10% can be influenced by a wide variety of factors, especially in production; however, none of our data showed a negative trend. Here are the averages.


Nodes Per Socket (NPS)

The AMD EPYC 7642 physically lays out its 8 dies across 4 quadrants on a single package. Because of this layout, AMD supports dividing the dies into NUMA domains, a feature called Nodes Per Socket, or NPS. The available NPS options differ from model to model; the AMD EPYC 7642 supports 4, 2, and 1 node(s) per socket. We thought it might be worth exploring this option, even though none of these choices yields a shared last-level cache. We did not observe any significant deltas in performance across the NPS options.
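
For anyone who wants to see the effect of this BIOS setting from the operating-system side, here is a minimal sketch (not Cloudflare’s tooling, and assuming a Linux host with sysfs mounted) that lists the NUMA nodes the kernel exposes and the CPUs in each:

```python
# Minimal sketch: list the NUMA nodes the Linux kernel exposes and the CPUs
# belonging to each, to confirm how an NPS setting splits the socket.
# Assumes a Linux host with sysfs mounted at /sys; not Cloudflare tooling.
from pathlib import Path

node_root = Path("/sys/devices/system/node")

for node in sorted(node_root.glob("node[0-9]*")):
    cpulist = (node / "cpulist").read_text().strip()
    print(f"{node.name}: CPUs {cpulist}")

# On a 48-core EPYC 7642, NPS1 should report a single node containing all
# cores, while NPS2 and NPS4 split them into two or four NUMA domains.
```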


Conclusion

Tuning the processor to yield an additional 6% throughput, or requests per second, out of the box has been a great start. Some parts will have more room for improvement than others. In production we were able to achieve an additional 2% requests per second with power determinism, and we achieved another 4% by reconfiguring the TDP to 240W. We will continue to work with AMD as well as internal teams within Cloudflare to identify additional areas of improvement. If you like the idea of supercharging the Internet on behalf of our customers, then come join us.

MEPs direct criticism and pointed questions about private enforcement agents at Danail Kirilov

Post Syndicated from Николай Марченко original https://bivol.bg/mepe-chsi-kirilov.html

Thursday, 27 February 2020


“I can assure you that during our meetings with the Minister of Justice, the Chamber of Private Enforcement Agents in Bulgaria, and other institutions, we raised the issues the petitioners bring up in their complaints quite pointedly. We have not neglected them; rest assured there will be results for Bulgarian citizens.” That is what Romanian MEP Cristian Terheș told Bivol; as a member of the European Parliament’s Committee on Petitions (PETI), he visited Bulgaria together with three colleagues from Spain, Romania, and Latvia.

Bivol’s investigation of 15 January 2020

As Bivol was first to report on 15 January 2020, the special mission was formed in response to the 15 petitions filed by Solidarity, the association of people harmed by private enforcement agents and the judicial system. “The private enforcement agent in Momchil Mondeshki’s circle, Stoyan Yakimov, as well as his Haskovo colleague Delcho Pehlivanov, who is close to GERB and DPS, are engaged in illegal commercial activity: debtors’ properties are bought up by affiliated companies,” read Bivol’s article “A European Parliament committee is coming to Bulgaria over the arbitrariness of private enforcement agents.”

Should the monitoring have been lifted?

Before being elected an MEP, Cristian Terheș built up solid professional experience in the financial and corporate sector in Romania and the United States, and he is well acquainted with how banks and credit institutions operate.

In his words, the problems of the credit markets in Bulgaria and Romania are largely the same. “But in Romania, not only the banks but also the other financial and credit institutions that lend money to people are subject to stronger supervision, to ensure that contracts do not contain so many unfair clauses,” the Romanian MEP added.

Cristian Terheș (Photo: Bivol)

He expressed his indignation that the mechanism for monitoring and oversight of the judicial systems of Bulgaria and Romania was dropped before such pressing problems as unfair clauses imposed on borrowers had been definitively resolved.

“Why did all those commissions in charge of the monitoring pay no attention over the years – from 2006, and then from 2007 onward, when Bulgaria and Romania became members of the European Union?”

“The monitoring mechanism should have helped solve the problems with lending, since this situation also affects the justice system,” Terheș insists.

He called on NGOs and the media in Bulgaria to alert the European institutions more actively. “And I want to congratulate the Bulgarian citizens for turning to the Committee on Petitions. This matters for all citizens of Europe – when they face such problems in society and cannot obtain a solution at the national level,” Terheș explained.

In his view, that is how the problem reached the level of the European Parliament. “And as you can see, this mission was formed. We will prepare a detailed report on our visit, in which we will also include recommendations on what can be done – and not only by the Bulgarian authorities, but by the banks as well.”

“We received feedback from the stakeholders, all the data we need in order to draw up the corresponding recommendations,” the Romanian MEP summed up.

“Constructive criticism”

Speaking to the Bulgarian media at the House of Europe, the chair of the EP Committee on Petitions, Dolors Montserrat (Spain’s minister of health, social services and equality from 2016 to 2018), declined to give details of the meetings with ministers Danail Kirilov and Emil Karanikolov, the deputy governor of the Bulgarian National Bank (BNB), the Association of Banks, and Bulgaria’s Chamber of Private Enforcement Agents.

“But I must note that there was also room for constructive criticism from all sides, and it was given a great deal of attention…”

That was her comment when Bivol asked her about what had been discussed with Danail Kirilov and the Chamber of Private Enforcement Agents, which deny that there are problems with debtors.

Dolors Montserrat with her colleagues at the House of Europe (Photo: Bivol)

“There is no need to dwell on every meeting we held. But for the situation to improve, there must be more constructive self-criticism from all the parties concerned,” Montserrat believes.

She did concede that “there is a certain imbalance in the relations between creditors and borrowers in Bulgaria.” The MEP gave assurances that the mission’s work “will continue in Brussels,” where a report on the Sofia visit, including recommendations, will be prepared. “This will take perhaps two or three months, but we will do everything possible to present it as quickly as we can,” the Spanish parliamentarian said.

The head of the Committee on Petitions said that in Sofia she had received “all the information.” “I can assure you that in these three days we have become all but experts in the field of lending; we are leaving Bulgaria with a large volume of documentation on the subject,” Montserrat said.

The MEP noted that the petitions concerning unfair clauses in consumer and mortgage loans “number in the dozens”:

“We assessed the complaints received and tried to verify all the documents.”

Montserrat also recalled that “legislative powers rest with the Bulgarian National Assembly”: “But my colleagues and I will work on drawing up recommendations to be voted on by the Committee on Petitions.” “We hope to offer solutions to the Bulgarian institutions and citizens,” the MEP said.

Montserrat also said that the legislative changes the authorities in Sofia have begun to introduce (most likely referring to the Code of Civil Procedure – ed.) are “a reform in the right direction.”

“We acknowledged the efforts of the Bulgarian government to reduce the imbalance between creditors and borrowers, but there is still room for improvement.”

Danail Kirilov: This is an artificially created problem

Minister of Justice Danail Kirilov (Photo: Dnes.bg)

According to Ivaylo Iliev of Solidarity’s managing board, the meeting with the petitioners on 25 February at the House of Europe lasted more than three hours, and the MEPs also received information about the so-called “Dark Room” (see the documents on Solidarity’s website).

“Since 2009 the Sofia District Court has housed the so-called ‘Dark Room,’ containing 150,000 unresolved court cases. It is an illegal depot for cases in which the judges never served the payment-order proceedings on the debtor so that he could defend himself,” Iliev explained. According to him, the writs of execution in these cases leave the case files illegally.

The press office of the Ministry of Justice did not report on the meeting between the minister, Danail Kirilov, and the MEPs. But on 20 February he commented to the Focus news agency on Solidarity’s protests in front of the ministry.

In that interview he said he would “present his position regarding the Directive on the protection of consumer rights and debtors” to the members of the European Parliament. The minister also accused the Bulgarian MEPs Radan Kanev, Angel Dzhambazki, Andrey Slabakov, and Tsvetelina Penkova of failing to stand up for Bulgaria on the subject of private enforcement agents. “I am counting on a correct assessment of this artificially created problem. I regret that our own Bulgarian MEPs let themselves be misled and, without a thorough knowledge of either the legislation or the practice of its application, did not defend the Bulgarian position during the debates in PETI on 5 September 2019,” Kirilov insisted.

The Chamber of Private Enforcement Agents has no problems with the judiciary (Photo: News.bg)

As expected, he also refused to resign, as Solidarity had demanded.

“It is claimed that we did nothing when the Code of Civil Procedure was amended with respect to enforcement proceedings. In fact, the opposite is true.”

Kirilov maintains that after the changes initiated by former ombudsman Maya Manolova, citizens are “protected from unlawful actions by private enforcement agents.”

“We provided the most balanced protection of debtors in enforcement proceedings that was possible. We changed the rules on notification and the options for appealing against acts. Enforcement proceedings can no longer happen without the debtor being duly notified twice: first with a notice that an order for execution has been issued, then with an invitation for voluntary compliance. The debtor can object, in which case the matter moves into classic contested litigation. We expanded the options for appealing against the actions of private enforcement agents. We also adjusted their fees for small debts, so that debtors cannot be abusively burdened with additional charges,” Kirilov says.

Ivaylo Iliev (Photo: Frognews)

As for the claims of Solidarity’s founder Ivaylo Iliev, the justice minister accused him of simply owing more than 7 million: “And he expects to resolve the question of his debts through a public outcry, to settle unsettled private relations.”

Solidarity’s founder replied to the minister that he would gladly settle his loan disputes if he had access to his payment-order case from the “Dark Room.”

Meanwhile, more and more petitioners are joining Solidarity, including foreign entrepreneurs (from Romania and the United States) who have run head-on into Bulgaria’s private enforcement agents.

 

A recording implicates Radnevo mayor Tenyo Tenev: he knew about the abuses at the municipal hospital but kept quiet.

Post Syndicated from Димитър Стоянов original https://bivol.bg/radnevo-zapis-tenyo-tenev-bolnica.html

Thursday, 27 February 2020


Radnevo mayor Tenyo Tenev knew about the abuses at the municipal hospital in Radnevo. That is evidenced by an audio recording from the run-up to the 2019 local elections, sent to Bivol. In a conversation with the deputy mayor for finance, Dimitar Zhelev, held in front of third parties, Tenev says:

TT: Before the elections, what was I supposed to come out and say? That we’re closing the kindergartens? You’re staying… I didn’t expect quite this much, but…
DZh: If the mayor announces the decision in front of everyone, he is the terrifying culprit! Can one man carry the whole cross?

TT: Look, we’ve known about these things for two years! We’ve been talking about the hospital for a month, the whole town knows! Have you seen anything written on social media, have you seen any tension among people? (..unintelligible) the municipality… So what are we talking about? That a circle of people, including friends of ours, are looking to collect some nice salaries. Now, Rumen, I don’t know how much he gets, I haven’t asked him. How much does his wife get? I know how much Vlado gets, I know Rosen, I know Dean Dinev.


The Rumen in question is probably Rumen Yovchev, chairman of GERB in Radnevo, chairman of the Radnevo Municipal Council, and a medic by profession. He has sat on the municipal council of the Thracian town for several terms, yet, according to those familiar with the matter, he continues to work a four-hour day at the hospital. His wife, Svetla Zhelyazkova, also mentioned by mayor Tenev, is an ear, nose and throat doctor at the municipal hospital. Vlado is probably Dr. Vlado Zhelev, a surgeon from the circle of friends around the mayor, Yovchev, and Zhelyazkova. Dean Dinev, who is mentioned by first and last name, is also part of the mayor’s circle of friends.

The scandal around the municipal hospital in Radnevo erupted a few days ago, when the director of the institution, Dr. Maya Uzunova, was detained in an operation by the State Agency for National Security (DANS) and the prosecutor’s office. Several other employees were detained along with her, including her secretary, her driver, and the hospital’s chief accountant.

On 24 February the Stara Zagora District Court remanded Uzunova and two of her colleagues in custody. Only the hospital’s chief accountant was placed under house arrest instead. Uzunova and her colleagues are charged with malfeasance in office and with concluding fictitious contracts in order to siphon off and launder money.

Concluding fictitious contracts is nothing new for Radnevo’s municipal companies; Bivol has already reported how the municipal football club was drained through precisely such fictitious contracts.

Investigators are working on the hypothesis that salaries of several thousand leva each were handed out to employees who are neither doctors nor medical staff. At the same time, other employees have not been paid for about three months. The hospital’s debts stand at between 1.4 and 1.6 million leva, of which, according to bTV, around 800,000 are owed for salaries and social-security contributions.

The report of the abuses at the hospital was filed about a week ago by Radnevo mayor Tenyo Tenev. According to those familiar with the matter, he was pushed into filing it by nine municipal councillors who had blocked the vote on the municipal budget because of the irregularities at the hospital. Tenev is now serving his second term; before that he was deputy mayor under Dr. Yulian Ilchev. Throughout, Tenev has run on the GERB ticket.

A scandalous detail: while he runs the Radnevo municipality, his wife, Svetla Teneva, is the appointed mayor of the village of Risimanovo, which lies in the same municipality. During his previous term Tenev found his wife already in that post and left her to run the village. After the latest local elections the situation repeated itself, and although it is clear to no one how Tenev exercises direct oversight of his wife’s work in Risimanovo, the Commission for Conflict of Interest found no such defect in the contested arrangement. In practice, nobody knows who gives the orders to whom – the mayor to his wife, or the other way around.
