On Infowars, the First Amendment and freedom from accountability

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/08/19/infowars/

Under the First Amendment to the US Constitution, Congress may not pass laws restricting freedom of expression. The constitutional provision dates from 1789. An entire branch of US law is loosely referred to as First Amendment law. Among other things, these laws spell out the limits of free expression. How exactly the provision is applied is also shown by the practice of the FCC (the Federal Communications Commission) and by case law. Both the legislation and the case law are in flux and change constantly.

In recent weeks there has been a new development which, whichever direction it takes, will affect the standards of free expression, American media believe. The protagonist is Alex Jones, a man in the media business: not exactly a journalist, but a source of disinformation and conspiracy theories with his own site Infowars and numerous audio and video feeds online.

With varying speed, varying intensity, varying tools and varying publicly stated grounds, Jones and content associated with him were removed (or access to them was cut off) from Facebook, YouTube, LinkedIn, Apple, Spotify, Pinterest, Vimeo and others. The case of Twitter and Jack Dorsey is somewhat different: Dorsey sent mixed signals for the longest time, but even so, Jones and Infowars have temporarily been barred from posting on that network as well.

There is no doubt that the events surrounding Infowars mark an important moment for assessing the rules on the internet: their origin, their application and their effectiveness.

Two things became clear at the very start of the cascade of measures:

1. The First Amendment does not give anyone the right to publish on anyone's platform. The measures against Alex Jones do not fall within the First Amendment's protection, because the amendment applies to restraint of the state.

2. The private companies' response has been inconsistent and incoherent, even though they claim otherwise. According to The New York Times, Facebook's response is governed by all of the following standards at once:

  • Facebook is deeply committed to free speech and will let people publish almost anything, including even Holocaust denial.
  • Unless Holocaust denial is hate speech, in which case the company may cut off access.
  • If a post contains a factual inaccuracy, it will not be removed, but it may be shown to very few people, thereby reducing its impact.
  • On the other hand, if disinformation is judged to incite imminent violence, Facebook will remove it, even if it is not hate speech.
  • And at the same time, if a site lies repeatedly, spreads conspiracy theories or even incites violence, it may stay on Facebook, because ultimately no lie is grounds for being thrown out.

But many other questions remain unanswered. As the New Statesman aptly asks: free speech, or consequence-free speech?

Which should it be: free speech, or speech without accountability? That conversation has yet to be had.

Find yourself a comfortable spot between techno-utopia and techno-panic.

Two rounds of stable kernels released

Post Syndicated from jake original https://lwn.net/Articles/762938/rss

Greg Kroah-Hartman has released two batches of stable kernels. The first set has fixes in various parts of the tree, while the second batch has a single fix for a problem with the page-table entry inversion that is done as a mitigation for the L1TF speculative-execution vulnerability. The first batch includes: 4.18.2, 4.17.16, 4.14.64, 4.9.121, 4.4.149, and 3.18.119. The second batch is: 4.18.3, 4.17.17, 4.14.65, 4.9.122, and 4.4.150. Users should upgrade, presumably to something in the second batch unless they are running the 3.18 series.

On the Commercial Register

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2154

For a week now, our country has had no Commercial Register.

The information about the causes is, to put it most politely, contradictory. There was no technical failure, except that several disk arrays are gone. The data was not lost, but its primary copy became inaccessible. The primary data set is lost, but there were backups. There were backups, but they cannot be restored... Personally, a little bird told me that all the information in the Commercial Register has been destroyed, not through malice but through technical failure, and is currently being gathered "from elsewhere". (They didn't tell me from where, but I assume from other state institutions and/or from companies that had bought this information from the Register. It hardly matters anyway; what matters is that one of the most critical bodies of information in Bulgaria was destroyed, through incompetence, by the very institution created to maintain it.)

If that is true, it is pointless to expect the company data in the Register to be restored any earlier than Monday. (It is not clear which Monday; probably some upcoming one, but still. 25 terabytes, or however much the data is, cannot be copied in an hour or two under our real-world conditions.) The more interesting question, though, is how this information came to be lost in the first place.

To me, this question is a mystery. It is true that a significant share of the contracts for the Register's technical maintenance and equipment were "won" (we know how that happens in Bulgaria) by companies whose main line of business is hairdressing and tourism. We also know that in Bulgaria people appoint their own cronies and then hand them tasks meant for competent people, a sure recipe for failure. Still, I cannot understand how not a single person there realized where they were heading technically before it was too late. It means that the incompetence, the negligence, the lack of oversight and the impunity are absolute, total, one hundred percent. And that the Register's staff (I can't speak for the cleaners and the counter clerks, but the management for certain) ought to be lustrated with a lifetime ban on holding state or municipal office. The diagnosis "exceptionally dangerous incompetent" is nothing new.

Over the past few years the Register has spent more than 6 million leva on technical maintenance and IT equipment. My tiny, laughable company would take on building the core infrastructure to store their data and guarantee its availability for under 60,000 leva. (No, this is not an offer; I know when even a stotinka will be given by today's rulers to someone who isn't "one of their own". I'm simply using my experience to illustrate how much of my money and yours has been squandered, or, put plainly, stolen. More than a hundred times what was needed.) Apparently stealing 99% of that money wasn't enough for them, so they stole the last 1% as well. Otherwise what just happened simply would not have happened.

They are supposedly investigating now. They will supposedly tell us who is to blame. They will supposedly build a central data repository so that data doesn't get lost... Oh, come on: if the Register had put even 1% of the funds allocated to it for IT equipment to their intended use, there would be no need for a central data repository; the data would not be lost under any realistic circumstances.

Given all that, who is to blame is absolutely clear. The people who are now promising us to spend yet more money. So that something different happens from what happens whenever they spend money... Is it just me, or does that sound, hmm, unconvincing?

If it's not just me, then who is really to blame is clear. No, not them. You have forgotten the folk tale about the madman and the cabbage pie... The guilty ones are us, you and I. And we are guilty of making excuses instead of acting.

What can we actually do? To start with, build media outlets that are not the behind-the-scenes property of the same old "successful young man". Take from our own meagre crusts so that those outlets can be decent and stable: if someone else sponsors them, they will serve him, not us. Make sure they tell the truth as it is, no matter how much courage that takes. Have them operate from abroad if necessary (it may well be necessary). Have them show every point of view, try to be objective, earn people's trust. And make sure those media outlets reach every Bulgarian, so that he can hear the truth and be convinced that this is the truth, rather than whatever the professional liars pour into his ears. It is the truth that can set him free.

Yes, there will be plenty more work after that. Creating parties that are neither corrupt nor puppeteered, standing up for them, funding them again from our half-empty pockets, fighting to get them registered, established, and to win people over to them. Then controlling them continuously with an iron hand, and forcing them to purge every corrupt official or tempted spiritual pauper who sneaks in. And having them make the changes that, little by little, will start pulling us out of the swamp, against the united, superbly coordinated and multi-billion-funded resistance of those who pushed us into this swamp and keep us in it so that we remain their livestock. The same people who own our entire state, including us... Yes, it is hard and slow. No one claims that the evolution from livestock to people is easy and fast, except the swindlers, of course.

The first step, though, is the media. Thank goodness we live in the age of the Internet. A handful of kleptocrats cannot bar ordinary people from the printing presses and the radio and thereby cut off every channel for spreading information. We can create those media ourselves.

Yes, and because of our national peculiarities it will be very hard. Anyone who has not tried to rouse Bulgarians to fight for their freedom does not know what that "Народе????" ("O, my people????") in Levski's notebook means.

But despite everything, we have to try. If we fail, we will wipe ourselves out as a people: we will save ourselves one by one through emigration, fleeing, in truth, from the Bulgarians who remain here, until the last one pulls the plug.

There is no third option. We choose for ourselves, through our actions or our inaction.

Help Wanted: Senior Staff Accountant

Post Syndicated from Yev original https://www.backblaze.com/blog/help-wanted-senior-staff-accountant/

Want to work at a company that helps customers in 156 countries around the world protect the memories they hold dear? A company that stores over 500 petabytes of customers’ photos, music, documents and work files in a purpose-built cloud storage system?

Well here’s your chance. Backblaze is looking for a Senior Staff Accountant!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 — robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple — grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • New parent childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office — located near Caltrain and Highways 101 & 280

Senior Staff Accountant Responsibilities:

  • Accurately prepare complex accruals, journal entries and balance sheet reconciliations as part of the monthly, quarterly and annual close process.
  • Ability to research and apply fundamental accounting theories and concepts under US GAAP.
  • Maintain fixed asset ledger, which includes interacting with equipment leasing companies, facilitating leasing documentation and interactions with various departments.
  • Conduct periodic physical inventory counts working with Operations.
  • Prepare schedules and documentation for external audits, internal control audits and various other regulatory audits.
  • Identify and implement process improvements to help reduce time to close, improve upon accuracy of underlying accounting records and enhance internal controls.
  • Perform other duties and special projects as assigned.

Qualifications:

  • Bachelor’s Degree in Accounting or Finance; CPA highly preferred.
  • 5+ years relevant accounting experience.
  • Big 4 experience is a plus.
  • Knowledge of inventory and cycle counting preferred.
  • Prior experience with QuickBooks, Excel, and Word desired.
  • Positive work ethic, strong analytical and organizational skills with a high level of attention to detail.
  • Excellent interpersonal skills and ability to work effectively across functional areas in a collaborative environment.
  • Demonstrated ability to thrive in a dynamic and fast-paced environment.

If this all sounds like you:

  1. Send an email to jobscontact@backblaze.com with the position in the subject line.
  2. Tell us a bit about your work history.
  3. Include your resume.

The post Help Wanted: Senior Staff Accountant appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Debian turns 25

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/debian-turns-25/

Want to feel old? Debian, the popular free Unix-like operating system based on the Linux kernel and GNU userland, is turning 25. Composed entirely of free software, Debian is maintained and packaged entirely by volunteers. Announced to the world by Ian Murdock 25 years ago this week, the first internal release, Debian 0.01, took place in September 1993, followed in June 1996 by the first stable version, Debian 1.1 (code name ‘Buzz’).

The following two decades have seen eight further major releases, the most recent being Debian 9.0 (code name ‘Stretch’), released in June 2017.

Raspbian

Raspberry Pi owes a considerable debt to the Debian project. Our operating system images are built on top of Raspbian Stretch, which is a community-led rebuild of Debian Stretch, optimised for the specific ARM cores used in our products.

The Raspberry Pi desktop environment

In addition to the core Debian system, we bundle a variety of useful non-Debian software. Some packages, like Simon’s UI mods and the Chromium web browser, are free as in speech. Others, like Wolfram Mathematica and Minecraft, are free as in beer.

Our most recent release adds more usability features, including a post-install wizard to simplify the setup process for new users.

Download Raspbian today!

If you’ve yet to try Raspbian on your Raspberry Pi, you can download it here. This tutorial from The MagPi demonstrates how to write an image onto a fresh SD card:

Use Etcher to install operating systems onto an SD card

Lucy Hattersley shows you how to install Raspberry Pi operating systems such as Raspbian onto an SD card, using the excellent Etcher. For more tutorials, check out The MagPi at http://magpi.cc ! Don’t want to miss an issue? Subscribe, and get every issue delivered straight to your door.

And those of you who are already using Raspbian, be sure to check you have the most up-to-date version by following this easy video tutorial:

Updating Raspbian on your Raspberry Pi || Raspberry Pi Foundation

How to update to the latest version of Raspbian on your Raspberry Pi. Download Raspbian here: https://www.raspberrypi.org/downloads/raspbian/

Don’t have a Raspberry Pi? Don’t worry: we also make a version of our operating system, based on x86 Debian, that will run on your PC or Mac! With an x86-based computer running our Debian Stretch OS, you can also use the PiServer tool to control a fleet of Raspberry Pis without SD cards.

The post Debian turns 25 appeared first on Raspberry Pi.

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/762914/rss

Security updates have been issued by Debian (intel-microcode, keystone, php-horde-image, and xen), Fedora (rsyslog), openSUSE (apache2, clamav, kernel, php7, qemu, samba, and Security), Oracle (mariadb and qemu-kvm), Red Hat (docker, mariadb, and qemu-kvm), Scientific Linux (mariadb and qemu-kvm), SUSE (GraphicsMagick, kernel, kgraft, mutt, perl-Archive-Zip, python, and xen), and Ubuntu (postgresql-10, postgresql-9.3, postgresql-9.5, procps, and webkit2gtk).

How to automate the import of third-party threat intelligence feeds into Amazon GuardDuty

Post Syndicated from Rajat Ravinder Varuni original https://aws.amazon.com/blogs/security/how-to-automate-import-third-party-threat-intelligence-feeds-into-amazon-guardduty/

Amazon GuardDuty is an AWS threat detection service that helps protect your AWS accounts and workloads by continuously monitoring them for malicious and unauthorized behavior. You can enable Amazon GuardDuty through the AWS Management Console with one click. It analyzes billions of events across your AWS accounts and uses machine learning to detect anomalies in account and workload activity. Then it references integrated threat intelligence feeds to identify suspected attackers. Within an AWS region, GuardDuty processes data from AWS CloudTrail Logs, Amazon Virtual Private Cloud (VPC) Flow Logs, and Domain Name System (DNS) Logs. All log data is encrypted in transit. GuardDuty extracts various fields from the logs for profiling and anomaly detection and then discards the logs. GuardDuty’s threat intelligence findings are based on ingested threat feeds from AWS threat intelligence and from third-party vendors CrowdStrike and Proofpoint.

However, beyond these built-in threat feeds, you have two ways to customize your protection. Customization is useful if you need to enforce industry-specific threat feeds, such as those for the financial services or healthcare space. The first customization option is to provide your own list of whitelisted IPs. The second is to generate findings based on third-party threat intelligence feeds that you own or have the rights to share and upload into GuardDuty. However, keeping a third-party threat list ingested into GuardDuty up to date requires many manual steps. You would need to:

  • Authorize administrator access
  • Download the list from a third-party provider
  • Upload the generated file to the service
  • Replace outdated threat feeds

In the following blog we’ll show you how to automate these steps when using a third-party feed. We’ll leverage FireEye iSIGHT Threat Intelligence as an example of how to upload a feed you have licensed to GuardDuty, but this solution can also work with other threat intelligence feeds. If you deploy this solution with the default parameters, it builds the following environment:
 

Figure 1: Diagram of the solution environment

The following resources are used in this solution:

  • An Amazon CloudWatch Event that periodically invokes an AWS Lambda function. By default, CloudWatch will invoke the function every six days, but you can alter this if you’d like.
  • An AWS Systems Manager Parameter Store that securely stores the public and private keys that you provide. These keys are required to download the threat feeds.
  • An AWS Lambda function that consists of a script that programmatically imports a licensed FireEye iSIGHT Threat Intelligence feed into Amazon GuardDuty (a rough sketch of what such a function might look like follows this list).
  • An AWS Identity and Access Management (IAM) role that gives the Lambda function access to the following:
    1. GuardDuty, to list, create, obtain, and update threat lists.
    2. CloudWatch Logs, to monitor, store, and access log files generated by AWS Lambda.
    3. Amazon S3, to upload threat lists on Amazon S3 and ingest them to GuardDuty.
  • An Amazon Simple Storage Service (S3) bucket to store your threat lists. After the solution is deployed, the bucket is retained unless you delete it manually.
  • Amazon GuardDuty, which needs to be enabled in the same AWS region in which you want to deploy the solution.

    Note: It’s a security best practice to enable GuardDuty in all regions.
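
For illustration only, a Lambda handler along these lines could perform the download, upload, and update cycle with boto3. This is a minimal sketch, not the code shipped in the solution's repository: the environment variable names, the S3 key, and the feed-download step are assumptions, and the FireEye download itself is reduced to a plain HTTP fetch.

    import os
    import urllib.request

    import boto3

    s3 = boto3.client('s3')
    guardduty = boto3.client('guardduty')

    BUCKET = os.environ['THREAT_LIST_BUCKET']   # hypothetical env var set by the stack
    FEED_URL = os.environ['THREAT_FEED_URL']    # hypothetical env var: vendor download URL
    LIST_NAME = 'FireEye-iSIGHT-feed'

    def handler(event, context):
        # 1. Download the latest feed from the third-party provider
        feed_bytes = urllib.request.urlopen(FEED_URL).read()

        # 2. Upload the feed to S3 so that GuardDuty can ingest it
        key = 'threat-feeds/fireeye.txt'
        s3.put_object(Bucket=BUCKET, Key=key, Body=feed_bytes)
        location = 'https://s3.amazonaws.com/{}/{}'.format(BUCKET, key)

        # 3. Create the threat intel set on the first run, replace it on later runs
        detector_id = guardduty.list_detectors()['DetectorIds'][0]
        existing = guardduty.list_threat_intel_sets(DetectorId=detector_id)['ThreatIntelSetIds']
        if not existing:
            guardduty.create_threat_intel_set(
                DetectorId=detector_id, Name=LIST_NAME,
                Format='FIRE_EYE',   # 'TXT' also works for a plain list of IPs/domains
                Location=location, Activate=True)
        else:
            guardduty.update_threat_intel_set(
                DetectorId=detector_id, ThreatIntelSetId=existing[0],
                Location=location, Activate=True)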

Deploy the solution

Once you’ve taken care of the prerequisites, follow these steps:

  1. Select the Launch Stack button to launch a CloudFormation stack in your account. It takes approximately 5 minutes for the CloudFormation stack to complete:
     
    Select this image to open a link that starts building the CloudFormation stack
  2. Notes: If you’ve invited other accounts to enable GuardDuty and become associated with your AWS account (such that you can view and manage their GuardDuty findings on their behalf), please run this solution from the master account. Find more information on managing master and member GuardDuty accounts here. Executing this solution from the master account ensures that GuardDuty reports findings from all the member accounts as well, using the imported threat list.

    The template will launch in the US East (N. Virginia) Region. To launch the solution in a different AWS Region, use the region selector in the console navigation bar. This is because GuardDuty is a region-specific service.

    The code is available on GitHub.

  3. On the Select Template page, select Next.
  4. On the Specify Details page, give your solution stack a name.
  5. Under Parameters, review the default parameters for the template and modify the values, if you’d like.

    • Public Key (<Requires input>): FireEye iSIGHT Threat Intelligence public key.
    • Private Key (<Requires input>): FireEye iSIGHT Threat Intelligence private key.
    • Days Requested (default: 7): The maximum age, in days, of the threats you want to collect (min 1, max 30).
    • Frequency (default: 6): The number of days between executions, i.e. how often the solution downloads a new threat feed (min 1, max 29).
  6. Select Next.
  7. On the Options page, you can specify tags (key-value pairs) for the resources in your stack, if you’d like, and then select Next.
  8. On the Review page, review and confirm the settings. Be sure to select the box acknowledging that the template will create AWS Identity and Access Management (IAM) resources with custom names.
  9. To deploy the stack, select Create.

After approximately 5 minutes, the stack creation should be complete. You can verify this on the Events tab:
 

Figure 2: Check the status of the stack creation on the “Events” tab

The Lambda function that updates your GuardDuty threat lists is invoked right after you provision the solution. It’s also set to run periodically to keep your environment updated. However, in scenarios that require faster updates to your threat intelligence lists, such as the discovery of a new zero-day vulnerability, you can manually run the Lambda function to avoid waiting until the scheduled update event. To manually run the Lambda function, follow the steps described here to create and ingest the newly downloaded threat feeds into Amazon GuardDuty.
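
If you prefer to script that manual run instead of using the Lambda console, a one-off invocation can be as simple as the following sketch (the function name here is a placeholder; use the name of the function that the CloudFormation stack actually created):

    import boto3

    lambda_client = boto3.client('lambda', region_name='us-east-1')

    # 'ThreatFeedImporter' is a placeholder; look up the real name in the stack's resources
    lambda_client.invoke(FunctionName='ThreatFeedImporter', InvocationType='Event')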

Summary

We’ve described how to deploy an automated solution that downloads the latest threat intelligence feeds you have licensed from a third-party provider such as FireEye. This solution provides a large amount of individual threat intelligence data for GuardDuty to process and report findings on. Furthermore, as newer threat feeds are published by FireEye (or the threat intelligence feed provider of your choice), they will be automatically ingested into GuardDuty.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum.

Want more AWS Security news? Follow us on Twitter.

New Ways to Track Internet Browsing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/new_ways_to_tra.html

Interesting research on web tracking: “Who Left Open the Cookie Jar? A Comprehensive Evaluation of Third-Party Cookie Policies”:

Abstract: Nowadays, cookies are the most prominent mechanism to identify and authenticate users on the Internet. Although protected by the Same Origin Policy, popular browsers include cookies in all requests, even when these are cross-site. Unfortunately, these third-party cookies enable both cross-site attacks and third-party tracking. As a response to these nefarious consequences, various countermeasures have been developed in the form of browser extensions or even protection mechanisms that are built directly into the browser.

In this paper, we evaluate the effectiveness of these defense mechanisms by leveraging a framework that automatically evaluates the enforcement of the policies imposed to third-party requests. By applying our framework, which generates a comprehensive set of test cases covering various web mechanisms, we identify several flaws in the policy implementations of the 7 browsers and 46 browser extensions that were evaluated. We find that even built-in protection mechanisms can be circumvented by multiple novel techniques we discover. Based on these results, we argue that our proposed framework is a much-needed tool to detect bypasses and evaluate solutions to the exposed leaks. Finally, we analyze the origin of the identified bypass techniques, and find that these are due to a variety of implementation, configuration and design flaws.

The researchers discovered many new tracking techniques that work despite all existing anonymous browsing tools. These have not yet been seen in the wild, but that will change soon.

Three news articles. BoingBoing post.

Writing Big JSON Files With Jackson

Post Syndicated from Bozho original https://techblog.bozho.net/writing-big-json-files-with-jackson/

Sometimes you need to export a lot of data to a JSON file. Maybe it’s “export all data to JSON”, or the GDPR “Right to portability”, where you effectively need to do the same.

And as with any big dataset, you can’t just fit it all in memory and write it to a file. It takes a while, it reads a lot of entries from the database and you need to be careful not to make such exports overload the entire system, or run out of memory.

Luckily, it’s fairly straightforward to do that with the help of Jackson’s SequenceWriter and, optionally, piped streams. Here’s what it looks like:

    private ObjectMapper jsonMapper = new ObjectMapper();
    private ExecutorService executorService = Executors.newFixedThreadPool(5);

    @Async
    public ListenableFuture<Boolean> export(UUID customerId) {
        try (PipedInputStream in = new PipedInputStream();
                PipedOutputStream pipedOut = new PipedOutputStream(in);
                GZIPOutputStream out = new GZIPOutputStream(pipedOut)) {

            Stopwatch stopwatch = Stopwatch.createStarted();

            ObjectWriter writer = jsonMapper.writer().withDefaultPrettyPrinter();

            // store the file in a separate thread, reading from the piped input stream
            Future<?> storageFuture = executorService.submit(() ->
                   storageProvider.storeFile(getFilePath(customerId), in));

            try (SequenceWriter sequenceWriter = writer.writeValues(out)) {
                // wrap all written records in a single JSON array
                sequenceWriter.init(true);

                int batchCounter = 0;
                while (true) {
                    List<Record> batch = readDatabaseBatch(batchCounter++);
                    if (batch.isEmpty()) {
                        break; // no more records to export
                    }
                    for (Record record : batch) {
                        sequenceWriter.write(record);
                    }
                }
            }

            // closing the SequenceWriter above also closes the gzip/piped streams, so the
            // storage thread sees end-of-stream; now wait for the upload to complete
            storageFuture.get();

            logger.info("Exporting took {} seconds", stopwatch.stop().elapsed(TimeUnit.SECONDS));

            return AsyncResult.forValue(true);
        } catch (Exception ex) {
            logger.error("Failed to export data", ex);
            return AsyncResult.forValue(false);
        }
    }

The code does a few things:

  • Uses a SequenceWriter to continuously write records. It is initialized with an OutputStream, to which everything is written. This could be a simple FileOutputStream, or a piped stream as discussed below. Note that the naming here is a bit misleading – writeValues(out) sounds like you are instructing the writer to write something now; instead it configures it to use the particular stream later.
  • The SequenceWriter is initialized with true, which means “wrap in array”. You are writing many identical records, so they should represent an array in the final JSON.
  • Uses PipedOutputStream and PipedInputStream to link the SequenceWriter to an InputStream which is then passed to a storage service. If we were explicitly working with files, there would be no need for that – simply passing a FileOutputStream would do. However, you may want to store the file differently, e.g. in Amazon S3, and there the putObject call requires an InputStream from which to read data and store it in S3. So, in effect, you are writing to an OutputStream which is directly fed into an InputStream, which, when read from, passes everything on to another OutputStream.
  • Storing the file is invoked in a separate thread, so that writing to the file does not block the current thread, whose purpose is to read from the database. Again, this would not be needed if a simple FileOutputStream were used.
  • The whole method is marked as @Async (spring) so that it doesn’t block execution – it gets invoked and finishes when ready (using an internal Spring executor service with a limited thread pool)
  • The database batch reading code is not shown here, as it varies depending on the database. The point is, you should fetch your data in batches, rather than SELECT * FROM X.
  • The OutputStream is wrapped in a GZIPOutputStream, as text files like JSON with repetitive elements benefit significantly from compression

The main work is done by Jackson’s SequenceWriter, and the (kind of obvious) point to take home is – don’t assume your data will fit in memory. It almost never does, so do everything in batches and incremental writes.

The post Writing Big JSON Files With Jackson appeared first on Bozho's tech blog.

[$] The first half of the 4.19 merge window

Post Syndicated from corbet original https://lwn.net/Articles/762566/rss

As of this writing, Linus Torvalds has pulled just over 7,600 non-merge
changesets into the mainline repository for the 4.19 development cycle.
4.19 thus seems to be off to a faster-than-usual start, perhaps because the
one-week delay in the opening of the merge window gave subsystem
maintainers a bit more time to get ready. There is, as usual, a lot of
interesting new code finding its way into the kernel, along with the usual
stream of fixes and cleanups.

timeShift(GrafanaBuzz, 1w) Issue 57

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/08/17/timeshiftgrafanabuzz-1w-issue-57/

Welcome to TimeShift

August is flying by, but hopefully there’ll still be time to enjoy a few more summer evenings. This week we’re sharing the video demoing the new Explore UI in Grafana from last week’s PromCon, monitoring VMware’s VKE with Prometheus, hosting a blog on a budget and more.

Come across an article you think might be a good fit for an upcoming issue? Suggestions for new content? Contact us.


Latest Stable Release: Grafana 5.2.2

Bug Fixes

  • Prometheus: Fix graph panel bar width issue in aligned Prometheus queries #12379
  • Dashboard: Dashboard links not updated when changing variables #12506
  • Postgres/MySQL/MSSQL: Fix connection leak #12636 #9827
  • Plugins: Fix loading of external plugins #12551
  • Dashboard: Remove unwanted scrollbars in embedded panels #12589
  • Prometheus: Prevent error using $__interval_ms in query #12533, thx @mtanda

See everything new in Grafana v5.2.2.

Download Grafana 5.2.2 Now


GrafanaCon LA
CFP Now Open!

Join us in Los Angeles, California February 25-26, 2019 for 2 days of talks focused on Grafana and the open source monitoring ecosystem.

Submit Your CFP Today


From the Blogosphere

Video: David Kaltschmidt: Exploring your Prometheus Data in Grafana: Last week we shared David’s slides from PromCon 2018, but it’s so much better to actually see it in action.

Installing Prometheus and Grafana on VMware Kubernetes Engine: This post details the process of deploying Prometheus as a monitoring framework for Kubernetes, along with Grafana as the visualization layer. Bahubali covers why monitoring K8s is different, building and preparing your cluster, and installing both Prometheus and Grafana.

Collecting DHCP Scope Data with Grafana: Eric wrote a Python script to help him collect aggregated data about groups of DHCP scopes and how his network users were changing. This lets him total up the number of free and used IPs in each range and visualize them on a graph in Grafana.

How I host this blog, CI and tooling: Vik is a budget conscious blogger and developer. In this article he provides a rundown of the infrastructure he uses for his blog and how he keeps it running for $8.53/month.

Graphite Grafana: Metrics Monitoring Made Easy: The first in a series on metrics monitoring made easy, this article gets you started with spinning up a Graphite/Grafana stack. Learn about the components of Graphite, Grafana, and how to get everything installed – the next article will dive into the actual monitoring.

System monitoring with Grafana, InfluxDB et Collectd: Learn about the components of a responsive dashboard system and how to easily deploy it with Docker.


We’re Hiring!

We’ve added new open positions at Grafana Labs! Do you love open source software? Do you thrive on tackling complex challenges to build the future? Want to work with awesome people? Be the next to join our team!

View our Open Positions


Upcoming Events

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We also like to make sure we mention other Grafana-related events happening all over the world. If you’re putting on just such an event, let us know and we’ll list it here.

2018 Sensu Summit | Portland, OR – August 22-23, 2018:
Brian Gann: The Sensu Plugin for Grafana – Brian will be showing a demo of the new Sensu plugin for Grafana on August 22, and conducting a 30 minute Grafana tutorial on the 23rd!

We are a proud sponsor of this year’s Sensu Summit! Come enjoy Portland in the summer and learn a ton from the sharpest operations engineers in monitoring!

More Info

Meetup Workshop: Monitoring with Prometheus and Grafana | Belfast, Northern Ireland – September 18, 2018:

If you’re in Belfast, or are going to be in September, this could be a great Meetup to attend. Topics include: Architecture, Prometheus, Alertmanager, Pushgateway, Telegraf, JMX exporter, Grafana, and more!

RSVP Now

CloudNative London 2018 | London, United Kingdom – September 26-28, 2018:

Tom Wilkie: Monitoring Kubernetes With Prometheus – In this talk Tom will explore all the moving parts for a working Prometheus-on-Kubernetes monitoring system, including kube-state-metrics, node-exporter, cAdvisor and Grafana. You will learn about the various methods for getting to a working setup: the manual approach, using CoreOS’s Prometheus Operator, or using Prometheus Ksonnet Mixin.

Tom will also share some little tips and tricks for getting the most out of your Prometheus monitoring, including the common pitfalls and what you should be alerting on.

Register Now

All Things Open 2018 | Raleigh, NC – October 21-23, 2018:

Tom Wilkie: The RED Method – How to Instrument your Services – The RED Method defines three key metrics you should measure for every microservice in your architecture; inspired by the USE Method from Brendan Gregg, it gives developers a template for instrumenting their services and building dashboards in a consistent, repeatable fashion.

In this talk we will discuss patterns of application instrumentation, where and when they are applicable, and how they can be implemented with Prometheus. We’ll cover Google’s Four Golden Signals, the RED Method, the USE Method, and Dye Testing. We’ll also discuss why consistency is an important approach for reducing cognitive load. Finally we’ll talk about the limitations of these approaches and what can be done to overcome them.

Register Now

OSMC 2018 | Nuremberg, Germany – November 5-8, 2018:

David Kaltschmidt: Logging is Coming to Grafana – Grafana is an OSS dashboarding platform with a focus on visualizing time series data as beautiful graphs. Now we’re adding support to show your logs inside Grafana as well. Adding support for log aggregation makes Grafana an even better tool for incident response: first, the metric graphs help in visually zeroing in on the issue. Then you can seamlessly switch over to view and search related log files, allowing you to better understand what your software was doing while the issue was occurring. The main part of this talk shows how to deploy the necessary parts for this integrated experience. In addition I’ll show the latest features of Grafana both for creating dashboards and maintaining their configuration. The last 10-15 minutes will be reserved for a Q&A.

Register Now


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard or monitoring related tweet and show it off! #monitoringLove

I love a good heatmap. Let us know if you figure out where that super high latency is coming from.


How are we doing?

Hope you enjoyed this issue of TimeShift. What do you think? Are there other types of content you’d like to see here? Submit a comment on this issue below, or post something at our community forum.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Creating an AI-powered marketing solution for sentiment analysis and engagement

Post Syndicated from Zach Barbitta original https://aws.amazon.com/blogs/messaging-and-targeting/creating-an-ai-powered-marketing-solution-for-sentiment-analysis-and-engagement/

Note: Matt Dombrowski, one of our amazing Solutions Architects, wrote this article. He also developed the sample code that you can use to implement this solution.


Marketers know that it’s critical to understand the conversations that customers are having about their brands. The holy grail isn’t just to understand what’s happening on social media, but to distill those conversations into actionable insights. After that, you can scale, automate, and continuously improve your brand’s ability to engage.

In this blog post, we’ll demonstrate how your marketing department can use machine learning to understand social user sentiment and engage with users.

You’ll assume the role of a Marketing Manager at an up-and-coming retail company called Mountain Manhattan. Mountain Manhattan has seen strong growth in recent years, and is now looking for a better way to engage with its Twitter followers. Specifically, Mountain Manhattan wants to know who its advocates and detractors are, what the overall sentiment of the brand is, and who the key influencers are that need the white-glove treatment.

After that, we’ll show you how to quickly deploy a solution for real-time social media sentiment analysis and engagement. This process consists of three basic steps. First, you collect tweets that refer to your brand’s Twitter handle. Next, you use machine learning to assign a score to those tweets. And finally, you use Amazon Pinpoint to engage with your customers based on those scores.

Mountain Manhattan’s challenges

Like many companies, Mountain Manhattan has more data than they can act on. Mountain Manhattan receives over 1,000 tweets a day. That’s more than 365,000 tweets per year! Like most companies with a social media strategy, Mountain Manhattan thinks of each of these tweets as an ‘opportunity to engage.’ One of Mountain Manhattan’s challenges is that they need the tone, voice, response time, and candor of their responses to be clear and consistent—and they need to do so in several different languages.

They tried the brute force approach of reading tweets and manually responding to each one. However, this process quickly became unsustainable (not to mention very expensive) because of limited time and resources. Also, while Mountain Manhattan’s marketing team is rather tech-savvy, they don’t have the time or experience to worry about technical issues like ongoing development or security. Mountain Manhattan needs an engagement solution that’s affordable and effective, and that has industry-leading reliability, scale, and security.

The solution

Mountain Manhattan decided to use several AWS services to create an integrated social media monitoring and customer engagement solution. The marketing team spent about 30 minutes setting up the sophisticated solution described below, which enables testing and iteration on multiple use cases before going live.

This solution monitors a Twitter feed, and sends relevant tweets to an Amazon Kinesis data stream. Then it uses an AWS Lambda function to take the appropriate action. In this case, that action involves first calling Amazon Comprehend to provide a sentiment score, and then using Amazon Pinpoint to engage with the Twitter user. This solution has several benefits for Mountain Manhattan:

  • It’s scalable. Mountain Manhattan has flash and holiday sales, targeted campaigns, and various ad campaigns that can lead to spikes in customer tweets. This solution can handle nearly any workload in real time. Furthermore, ingesting every single tweet about their brand helps Mountain Manhattan get a holistic view of customer sentiment.
  • It’s easy to use. Mountain Manhattan needs to adapt to their customer needs. This means that they need a solution that’s customizable, user-friendly, and intuitive to use. By using Amazon Pinpoint, Mountain Manhattan’s marketing team was able to set up recurring campaigns based on certain customer characteristics. The daily, automated campaigns send notifications to an ever-updating dynamic segment. This ensures that customers never receive the same campaign message twice.
  • It’s cost-effective. Priorities for Mountain Manhattan can change quickly, and long-term contracts are no longer appealing to management. By using AWS services, there are no subscription fees, upfront costs, or long-term commitments. Mountain Manhattan pays only for what they use, and they can adjust their marketing spend at any time.
  • It lets you own your data. Data is the lifeblood of modern marketing organizations. Companies need to own their data for use across many applications and systems. This solution gives Mountain Manhattan that ownership and flexibility. If it ever becomes necessary, they can change the destination of their Kinesis data streams to nearly any destination, and can export their customer data from Amazon Pinpoint.

How the solution works

The following architecture diagram shows the various AWS services that enable this AI-powered social sentiment marketing solution.

An image that shows the relationship between the various components used in this solution.

Let’s take a closer look at each of these components. This solution uses the following services and solutions:

  • Mobile client: Mountain Manhattan’s mobile app uses the Twitter SDK to authenticate users. The app is implemented in React Native for cross-platform compatibility, and because Mountain Manhattan’s developers are more familiar with JavaScript. Integrating the Twitter SDK enables Mountain Manhattan to map customers’ Twitter handles to specific mobile devices. There are a variety of ways to authenticate users—including authentication services from Facebook, Google, and Amazon. In this example, we focus on Twitter.
  • Amazon Kinesis Data Streams: This AWS service transfers tweets from Twitter into AWS Lambda (for sentiment analysis) and Amazon S3 (for long-term archival). Kinesis Data Streams can capture and store terabytes of data per hour from hundreds of thousands of sources. In the future, Mountain Manhattan could expand this solution to analyze data from Facebook, point-of-sale terminals, and website click streams.
  • Amazon Elasticsearch Service: Kinesis Firehose streams the tweets into an Elasticsearch cluster. By using Elasticsearch, Mountain Manhattan can easily search the data, and can visualize it by using Kibana.
  • AWS Lambda and Amazon Comprehend: Mountain Manhattan uses AWS Lambda to execute code without having to worry about deploying and maintaining servers. The AWS Lambda function looks at the tweets as they come in and determines the appropriate action to take. In Mountain Manhattan’s case, if the customer who tweeted is known, it calls Amazon Comprehend to perform AI-based sentiment analysis. Based on the results of that sentiment analysis, the Lambda function calls Amazon Pinpoint to begin the customer engagement process. (A simplified sketch of this step follows the list.)
  • Amazon Pinpoint: This solution uses Amazon Pinpoint to handle two essential functions. First, it captures information about endpoints (the unique devices that use the app). Second, it sends targeted campaigns to those endpoints. The AWS Mobile SDK, which is integrated into Mountain Manhattan’s app, automatically associates the customer’s Twitter handle with their endpoint ID in Amazon Pinpoint. Mountain Manhattan also collects some custom attributes for each endpoint. For example, they place each endpoint into one of the following categories: Influencers, Supporters, Detractors, Loyal Shoppers, and CS Support Needed. By categorizing customers in this way, Mountain Manhattan can create more personalized messaging.
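
To make the Lambda step more concrete, here is a minimal sketch of what it could look like in Python. This is not the code from the solution's repository: the Pinpoint application ID, the way the endpoint ID is derived from the Twitter handle, and the attribute names are assumptions for illustration.

    import base64
    import json
    import os

    import boto3

    comprehend = boto3.client('comprehend')
    pinpoint = boto3.client('pinpoint')

    APP_ID = os.environ['PINPOINT_APP_ID']  # hypothetical env var with the Pinpoint project ID

    def handler(event, context):
        for rec in event['Records']:
            # Kinesis delivers the tweet as a base64-encoded JSON blob
            tweet = json.loads(base64.b64decode(rec['kinesis']['data']))
            text = tweet['text']
            handle = tweet['user']['screen_name']

            # AI-based sentiment analysis: POSITIVE, NEGATIVE, NEUTRAL or MIXED
            sentiment = comprehend.detect_sentiment(Text=text, LanguageCode='en')['Sentiment']

            # Tag the customer's endpoint so that segments and campaigns can target it.
            # The endpoint ID is assumed to equal the Twitter handle here; in the real
            # solution the mobile app associates the handle with the endpoint.
            pinpoint.update_endpoint(
                ApplicationId=APP_ID,
                EndpointId=handle,
                EndpointRequest={'Attributes': {'Sentiment': [sentiment.title()]}})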

Mountain Manhattan’s solution in action

As Mountain Manhattan starts to ingest tweets, they get to know a lot more about their customers than just what they said. Mountain Manhattan can see the number of followers a user has, which is a good way to identify influencers. Additionally, Mountain Manhattan can see the Twitter user’s description, logo, picture, and location (if the user has exposed it). All of this data is fed into the AWS Lambda function, where Mountain Manhattan can take the customized action.

The screenshots in the following sections show the kinds of push notifications that Mountain Manhattan could automatically send to customers based on the content of their tweets.

Identifying influencers and early adopters

Mountain Manhattan’s AWS Lambda function determines how many Twitter followers each user has. If the number of followers is above a certain threshold, the function attaches the Influencer custom attribute to the user’s endpoint, and sends them a push notification.

A push notification that says "We like you too! Tap here to join our Influencers Club and get free stuff!"
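
A sketch of that check, under the same caveats as the Lambda sketch above (the follower threshold, attribute name, and notification text are made up for illustration):

    import boto3

    pinpoint = boto3.client('pinpoint')
    APP_ID = 'your-pinpoint-project-id'   # placeholder
    FOLLOWER_THRESHOLD = 10000            # made-up cut-off for who counts as an influencer

    def tag_influencer(tweet, endpoint_id):
        if tweet['user']['followers_count'] < FOLLOWER_THRESHOLD:
            return
        # Mark the endpoint so that an "Influencers" segment can pick it up later
        pinpoint.update_endpoint(
            ApplicationId=APP_ID,
            EndpointId=endpoint_id,
            EndpointRequest={'Attributes': {'Segment': ['Influencers']}})
        # And send the invitation push right away
        pinpoint.send_messages(
            ApplicationId=APP_ID,
            MessageRequest={
                'Endpoints': {endpoint_id: {}},
                'MessageConfiguration': {
                    'APNSMessage': {
                        'Action': 'OPEN_APP',
                        'Title': 'We like you too!',
                        'Body': 'Tap here to join our Influencers Club and get free stuff!'}}})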

Tracking and engaging with consumers during events or promotions

During events, Mountain Manhattan can join the conversation with their customers by sending messages in real time based on customers’ tweets.

A push notification that says "We're glad you're enjoying the sale! Tap here to subscribe to our events calendar."

Proactively engaging with customers having support issues

When Mountain Manhattan determines that the sentiment of a tweet is negative, they can send custom push notifications in an attempt to resolve the issue.

A push notification that says "Sorry to hear you're having trouble! :( Tap here to talk to Jake, one of our best support team members."

Offering discounts or concessions to unhappy customers

Mountain Manhattan could monitor for certain words or phrases, such as “shipping delays”. When they detect these keywords, Amazon Pinpoint can automatically send a push notification that offers a discount on a future purchase.

A push notification that says "We agree--delays are annoying! Tap here to get 20% off your next order."

Deploying the solution

Now that we’ve seen how this solution works, it’s time to implement it. The coolest part? You can use an AWS CloudFormation template to deploy all of the AWS components of this solution in a few clicks and about 10 minutes of your time.

Note: The procedures for deploying this solution might change over time as we continue to make improvements to it. For the latest procedures, see the Github page for this solution at https://github.com/aws-samples/amazon-pinpoint-social-sentiment/.

Prerequisites

To complete these procedures, you have to have the following:

  • A mobile app that uses Twitter’s APIs or SDK for authentication and for ingesting tweets.
  • A macOS-based computer and a physical iOS device (the Simulator that’s included with Xcode isn’t sufficient for testing this solution).
  • Xcode, Node.js, npm, and CocoaPods installed on your macOS-based computer.
    • To download Xcode, go to https://developer.apple.com/download/.
    • To download Node.js and npm, go to https://nodejs.org/en/. Download the latest Long-Term Support (LTS) version for macOS.
    • To download and install CocoaPods, type the following command at the macOS command line: sudo gem install cocoapods
  • The AWS Command Line Interface (AWS CLI) installed and configured on your macOS-based computer. For information about installing the AWS CLI, see Installing the AWS Command Line Interface. For information about setting up the AWS CLI, see Configuring the AWS CLI.
  • An AWS account with sufficient permissions to create the resources shown in the architecture diagram in the earlier section. For more information about creating an AWS account, see How do I create and activate a new Amazon Web Services account.
  • An Amazon EC2 key pair. You need this to log in to the EC2 instance if you want to modify the Twitter handle that you’re monitoring. For more information, see Creating a Key Pair Using Amazon EC2.
  • An Apple Developer account. Note that the approach that we cover in this post focuses exclusively on iOS devices. You can implement this solution on Android devices as well, but that process isn’t covered in this post.

Part 1: Create a Twitter application

The first step in this process is to create a Twitter app, which gives you access to the Twitter API. This solution uses the Twitter API to collect tweets in real time.

To create a Twitter application:

  1. Log in to your Twitter account. If you don’t already have a Twitter account, create one at https://twitter.com/signup.
  2. Go to https://apps.twitter.com/app/new, and then choose Create a new application.
  3. Under Application Details, complete the following sections:
    • For Name, type the name of your app.
    • For Description, type a description of your app.
    • For the Website and Callback URL fields, type any fully qualified URL (such as https://www.example.com). You’ll change these values in a later step, so the values you enter at this point aren’t important.
  4. Choose Create your Twitter application.
  5. Under Your access token, choose Create your access token.
  6. Under Application type, choose Read Only.
  7. Under Oauth settings, note the values next to Consumer key and Consumer secret. Then, under Your access token, note the values next to Access token and Access token secret. You’ll need all of these values in later steps.

Part 2: Install the dependencies

This solution requires you to download and set up some files from a GitHub repository.

To configure the AWS Mobile SDK in your app:

  1. Open Terminal.app. On the command line, navigate to the directory where you want to create your project.
  2. On the command line, type the following command to clone the repository that contains the source code that you’re using to configure this solution: git clone https://github.com/aws-samples/amazon-pinpoint-social-sentiment/
  3. Type the following command to change to the directory that contains the installation files: cd amazon-pinpoint-social-sentiment/mobile
  4. Type the following command to download the dependencies for this solution: npm install
  5. Type the following command to link the dependencies in the project: react-native link
  6. Type the following command to change into the ios directory: cd ios
  7. Type the following command to install CocoaPods into your project: pod install

Part 3: Set up your app to use the AWS Mobile SDK

To configure your app:

  1. From the /mobile directory, type the following command to create a backend project for your app and pull the service configuration (aws-exports.js file) into your project: awsmobile init. Press Enter at each prompt to accept the default response, as shown in the following example.
    Please tell us about your project:
    ? Where is your project's source directory: /
    ? Where is your project's distribution directory that stores build artifacts: /
    ? What is your project's build command: npm run-script build
    ? What is your project's start command for local test run: npm run-script start
    
    ? What awsmobile project name would you like to use: mobile-2018-08-16-03-16-39

  2. Open the file aws-exports.js. This file contains information about the backend configuration of your AWS Mobile Hub project. Take note of the aws_mobile_analytics_app_id key—you’ll use this value in a later step.
  3. In a text editor, open the file pinpoint-social-sentiment/mobile/App.js. Under TwitterAuth.init, next to twitter_key, replace <your key here> with the consumer key that you received when you created your Twitter app in step 1. Then, next to twitter_secret, replace <your secret here> with the consumer secret you received when you created your Twitter app. When you finish, save the file.
  4. In a text editor, open the file amazon-pinpoint-social-sentiment/mobile/ios/MobileCon/AppDelegate.m. Search for the following section:
    - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
    {
      NSURL *jsCodeLocation;
    
      [[Twitter sharedInstance] startWithConsumerKey:@"<your-consumer-key>" consumerSecret:@"<your-consumer-secret>"];

    In this section, replace <your-consumer-key> with your Twitter consumer key, and replace <your-consumer-secret> with your Twitter consumer secret.

  5. In a text editor, open the file amazon-pinpoint-social-sentiment/mobile/ios/MobileCon/Info.plist. Search for the following section:
    <key>CFBundleURLTypes</key>
        <array>
            <dict>
                <key>CFBundleURLSchemes</key>
                <array>
                    <string>twitterkit-<your-API-key></string>
                </array>
            </dict>
        </array>
        ...

    Replace <your-API-key> with your Twitter consumer key.

Part 4: Set up push notifications in your app

Now you’re ready to set up your app to send push notifications. A recent Medium post from Nader Dabit, one of our Developer Advocates, outlines this process nicely. Start at the Apple Developer Configuration section, and complete the remaining steps. After you complete these steps, your app is ready to send push notifications.

Part 5: Launch the AWS CloudFormation template

While your app is building, you can launch the AWS CloudFormation template that sets up the backend components that power this solution.

  1. Sign in to the AWS Management Console, and then open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation/home.
  2. Use the region selector to select the US East (N. Virginia) region.
  3. Choose Create new stack.
  4. Next to Choose a template, choose Specify an Amazon S3 template URL, and then paste the following URL: https://s3.amazonaws.com/mattd-customer-share/twitterdemo.template.yaml. Choose Next.
  5. Under Specify Details, for Stack Name, type a name for the CloudFormation stack.
  6. Under Parameters, do the following:
    1. For AccessToken, type your Twitter access token.
    2. For SecretAccessToken, type your Twitter access token secret.
    3. For AppId, type the app ID that you obtained in Part 3.
    4. For ConsumerKey, type your Twitter consumer key.
    5. For ConsumerSecret, type your Twitter consumer secret.
  7. Choose Next.
  8. On the next page, review your settings, and then choose Next again. On the final page, select the box to indicate that you understand that AWS CloudFormation will create IAM resources, and then choose Create.

When you choose Create, AWS CloudFormation creates all of the backend components for the application. These include an EC2 instance, networking infrastructure, a Kinesis data stream, a Kinesis Firehose delivery stream, an S3 bucket, an Elasticsearch cluster, and a Lambda function. This process takes about 10 minutes to complete.

Part 6: Send a test tweet

Now you’re ready to test the solution to make sure that all of the components work as expected.

Start by logging in to your Twitter account. Send a tweet to @awsformobile. Your tweet should contain language that has a positive sentiment.

Your EC2 instance, which monitors the Twitter streaming API, captures this tweet. When this happens, the EC2 instance uses the Kinesis data stream to send the tweet to an Amazon S3 bucket for long-term storage. It also sends the tweet to AWS Lambda, which uses Amazon Comprehend to assign a sentiment score to the tweet. If the message is positive, Amazon Pinpoint sends a push notification to the Twitter handle that sent the message.
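
The Lambda function itself is created by the CloudFormation template, so you don't need to write it, but a stripped-down sketch of the kind of logic it performs may help when you read the CloudWatch logs in the next step. The event shape, function name, notification wording, and endpoint addressing below are assumptions for illustration; only the Amazon Comprehend and Amazon Pinpoint calls reflect the real service APIs.

import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
pinpoint = boto3.client("pinpoint", region_name="us-east-1")

PINPOINT_APP_ID = "<your Amazon Pinpoint app ID>"  # assumption: supplied via configuration

def handler(event, context):
    # Assumption: the incoming record carries the tweet text and the sender's handle.
    tweet_text = event["text"]
    sender = event["user"]["screen_name"]

    # Ask Amazon Comprehend for the overall sentiment of the tweet.
    sentiment = comprehend.detect_sentiment(Text=tweet_text, LanguageCode="en")["Sentiment"]

    if sentiment == "POSITIVE":
        # Assumption: the mobile app registered a Pinpoint endpoint whose ID is the Twitter handle.
        pinpoint.send_messages(
            ApplicationId=PINPOINT_APP_ID,
            MessageRequest={
                "Endpoints": {sender: {}},
                "MessageConfiguration": {
                    "APNSMessage": {
                        "Title": "Thanks for the shout-out!",
                        "Body": "We're glad you're enjoying the app.",
                    }
                },
            },
        )
    return {"sentiment": sentiment}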

You can monitor the execution of the Lambda function by using Amazon CloudWatch Logs. You can access the CloudWatch Logs console at https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logs. The log should contain an entry for each tweet that the Lambda function processes.

On the Amazon Elasticsearch Service (Amazon ES) console, you can watch as Amazon ES catalogs incoming tweets. You can access this console at https://console.aws.amazon.com/es/home?region=us-east-1. For the Amazon ES domain for the tweets, choose the Kibana URL. You can use Kibana to easily search your incoming tweets.

Finally, you can go to your Amazon S3 bucket to view an archive of the tweets that were addressed to you. This bucket is useful for simple archiving, additional analysis, visualization, or even machine learning. You can access the Amazon S3 console at https://s3.console.aws.amazon.com/s3/home?region=us-east-1#.

Part 7: Create an Amazon Pinpoint campaign

In the real world, you probably don’t want to send messages to users immediately after they send tweets to your Twitter handle. If you did, you might seem too aggressive, and your customers might hesitate to engage with your brand in the future.

Fortunately, you can use the campaign scheduling tools in Amazon Pinpoint to create a recurring campaign.

  1. Sign in to the AWS Management Console, and then open the Amazon Pinpoint console at https://console.aws.amazon.com/pinpoint/home/?region=us-east-1.
  2. On the Projects page, choose your app.
  3. In the navigation pane, choose Campaigns, and then choose New Campaign.
  4. For Campaign name, type a name for the campaign, and then choose Next step.
  5. On the Segment page, do the following
    1. Choose Create a new segment.
    2. For Name your segment to reuse it later, type a name for the segment.
    3. For Filter by user attributes, choose the plus sign (+) icon. Filter the segment to include all endpoints where Sentiment is Positive. (A programmatic equivalent is sketched after this list.)
    4. Choose Next step.
  6. On the Message page, type the message that you want to send, and then choose Next step. To learn more about writing mobile push messages, see Writing a Mobile Push Message in the Amazon Pinpoint User Guide.
  7. On the Schedule page, choose the date and time when the message will be sent. You can also schedule the campaign to run on a recurring basis, such as every week. To learn more about scheduling campaigns, see Set the Campaign Schedule in the Amazon Pinpoint User Guide.
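
If you prefer to create the sentiment-based segment programmatically (for example, from a deployment script), a boto3 sketch along these lines should work. It assumes the sentiment value is stored as a custom endpoint attribute named Sentiment; adjust the dimension if your solution records it as a user attribute instead.

import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

pinpoint.create_segment(
    ApplicationId="<your Amazon Pinpoint app ID>",
    WriteSegmentRequest={
        "Name": "PositiveSentiment",
        "Dimensions": {
            # Include every endpoint whose Sentiment attribute equals "Positive".
            "Attributes": {
                "Sentiment": {"AttributeType": "INCLUSIVE", "Values": ["Positive"]}
            }
        },
    },
)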

Final thoughts

In this blog post, we demonstrated how your marketing department can use machine learning to understand social user sentiment and engage with users.

In the interest of transparency, we calculated the total costs associated with running this solution. Our calculation includes a small Elasticsearch cluster, a small EC2 instance, storage costs, compute costs, and messaging costs. Assuming your app has 1 million monthly active users (MAUs), and assuming that 0.5% of those MAUs mention your brand every month on Twitter, running this solution would cost $28.66 per month, or just under four cents an hour.

We think this solution is one of the most affordable and capable social media sentiment analysis tools you’ll find on the market today. The best part about this solution is that it can be a complete solution—or the starting point for your own customized solution.

Need to send push messages on other platforms, such as Firebase Cloud Messaging (FCM) (for most Android devices)? No problem. Just set up your app to send endpoint data to Amazon Pinpoint and to send push notifications, and you’re ready to go! Want to send messages through different channels? If you have other endpoint data for your customers (such as email addresses or mobile phone numbers), you can add channels to your project in Amazon Pinpoint, and then use those channels to send messages.

We’re very excited about this solution, and we can’t wait to see what you build with it!

The Problems and Promise of WebAssembly (Project Zero)

Post Syndicated from jake original https://lwn.net/Articles/762856/rss

Over at Google’s Project Zero blog, Natalie Silvanovich looks at some of the bugs the project has found in WebAssembly, which is a binary format to run code in the browser for web applications. She also looks to the future: “There are two emerging features of WebAssembly that are likely to have a security impact. One is threading. Currently, WebAssembly only supports concurrency via JavaScript workers, but this is likely to change. Since JavaScript is designed assuming that this is the only concurrency model, WebAssembly threading has the potential to require a lot of code to be thread safe that did not previously need to be, and this could lead to security problems.

WebAssembly GC [garbage collection] is another potential feature of WebAssembly that could lead to security problems. Currently, some uses of WebAssembly have performance problems due to the lack of higher-level memory management in WebAssembly. For example, it is difficult to implement a performant Java Virtual Machine in WebAssembly. If WebAssembly GC is implemented, it will increase the number of applications that WebAssembly can be used for, but it will also make it more likely that vulnerabilities related to memory management will occur in both WebAssembly engines and applications written in WebAssembly.”

Debian: 25 years and counting

Post Syndicated from jake original https://lwn.net/Articles/762854/rss

The Debian project is celebrating the 25th anniversary of its founding by Ian Murdock on August 16, 1993. The “Bits from Debian” blog had this to say: “Today, the Debian project is a large and thriving organization with countless self-organized teams comprised of volunteers. While it often looks chaotic from the outside, the project is sustained by its two main organizational documents: the Debian Social Contract, which provides a vision of improving society, and the Debian Free Software Guidelines, which provide an indication of what software is considered usable. They are supplemented by the project’s Constitution which lays down the project structure, and the Code of Conduct, which sets the tone for interactions within the project.

Every day over the last 25 years, people have sent bug reports and patches, uploaded packages, updated translations, created artwork, organized events about Debian, updated the website, taught others how to use Debian, and created hundreds of derivatives.” Happy birthday to the project from all of us here at LWN.

What To Do When You Get a B2 503 (or 500) Server Error

Post Syndicated from Brian Wilson original https://www.backblaze.com/blog/b2-503-500-server-error/

Backblaze logo
Just try again — it’s free, easy, and will work.

Seriously, that’s it. Occasionally, I’ll see questions that amount to, “I’m getting a 503 error; does that mean B2 is down?” To address that question, I wanted to take today’s post to go into a bit more detail on how to handle a 500 or 503 error. The short answer is no. B2 is not down. It simply means that B2 is functioning as designed as the most affordable, easy to use cloud storage service on the planet.

As we’ve described in our developer docs, the best decision is to write your integration in a way that it retries in the event of a 500 or 503. This modest amount of upfront work will result in a stable and transparent long term experience.

The Backblaze Contract Architecture

To understand the vast majority of B2 500 and 503 errors, it’s helpful to go into the “contract architecture” for B2. To create a service that is fully scalable at incredibly low cost, Backblaze has had to innovate in a number of areas. One way is what we refer to as “contract architecture.” It’s the approach that let us cut a large expense in traditional cloud storage infrastructure — high bandwidth load balancers for uploads.

Here’s how it works: when a client wants to push data to Backblaze, it contacts a “dispatching server.” That dispatching server figures out where the data will ultimately live inside a given Backblaze data center.

The dispatching server tells the client, “There is space over on vault-9015.”

Armed with that information (and an auth token), the client ends its connection with the dispatching server and creates a brand new request directly to vault-9015. The “contract” concept is not novel: ultimately, all APIs are contracts between two entities (machines). In the B2 case, our design leverages that insight as the client and vault negotiate how they will work together. In this example, once authenticated, the client continues to transmit to vault-9015 until it’s done or the vault fills up (or happens to go offline). In those instances, all the client has to do is return to the dispatching server to get information for the next available vault. This is a relatively trivial step and can be easily handled at the software level.

What Causes a B2 500 or 503 Error Response?

The client knows when to go back to the dispatching server because it receives (wait for it) a 500 or 503 error from vault-9015. The system is designed to send a firm message that says, in effect, “stop uploading to vault-9015.” We documented the specifics of what happens where in the B2 error handling protocols. The bottom line is an error in the 500 block should be interpreted by the client as the signal to GO BACK to the dispatching server and ask for a new vault for uploads. Rinse and repeat. It’s a free process that causes negligible incremental overhead.

What if, after getting a 503 and asking the dispatch server for a new URL, you try to upload and get ANOTHER 503 from the new vault? To address this unusual case, write your software to pause for a few seconds, then go back to the dispatch server. In this scenario, the client has hit a statistically unusual situation: it was sent to a vault with very little space left, and somebody else got there first and filled up that space. The second 503 is a sign that the system is functioning as designed, and your program can handle it elegantly by going back to the dispatch server.
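
Here is a minimal sketch of that retry loop, written in Python against the native B2 HTTP API with the requests library. It assumes you have already called b2_authorize_account and have an API URL and account authorization token; everything except the 500/503 handling described above is kept deliberately simple.

import hashlib
import time
import requests

def upload_with_retry(api_url, auth_token, bucket_id, file_name, data, max_attempts=5):
    """Upload to B2, going back to the dispatching server on a 500 or 503."""
    sha1 = hashlib.sha1(data).hexdigest()
    for attempt in range(max_attempts):
        if attempt:
            time.sleep(2)  # after a repeated 5xx, pause briefly as the post recommends

        # Ask the dispatching server which vault (upload URL) to use.
        upload = requests.post(
            f"{api_url}/b2api/v2/b2_get_upload_url",
            headers={"Authorization": auth_token},
            json={"bucketId": bucket_id},
        ).json()

        response = requests.post(
            upload["uploadUrl"],
            headers={
                "Authorization": upload["authorizationToken"],
                "X-Bz-File-Name": requests.utils.quote(file_name),
                "Content-Type": "b2/x-auto",
                "X-Bz-Content-Sha1": sha1,
            },
            data=data,
        )
        if response.status_code in (500, 503):
            continue  # the vault is full or offline; get a fresh upload URL and try again
        response.raise_for_status()
        return response.json()
    raise RuntimeError("upload kept returning 5xx errors; giving up")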

Other services, notably Amazon S3, provide the client with a “well known URL.” The client can merrily push data to that URL, and Amazon handles load balancing and finding open storage space after receiving the data. That’s a totally valid approach, but objectively more expensive, as it involves high bandwidth load balancers. There are other interesting implications to the load balancing scenario. If you’re interested, I wrote a blog post on the difference between the two approaches.

As I discussed in that post, the contract architecture does introduce some complexity when the client has to go back to the dispatching server. But, for that modest amount of error handling upfront, we help fuel Backblaze B2 as an infinitely scalable, fully sustainable service that has been, and will continue to be, the affordability leader in the object storage market.

The post What To Do When You Get a B2 503 (or 500) Server Error appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Managing Amazon SNS Subscription Attributes with AWS CloudFormation

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/managing-amazon-sns-subscription-attributes-with-aws-cloudformation/

This post is courtesy of Otavio Ferreira, Manager, Amazon SNS, AWS Messaging.

Amazon SNS is a fully managed pub/sub messaging and event-driven computing service that can decouple distributed systems and microservices. By default, when your publisher system posts a message to an Amazon SNS topic, all systems subscribed to the topic receive a copy of the message. By using Amazon SNS subscription attributes, you can customize this default behavior and make Amazon SNS fit your use cases even more naturally. The available set of Amazon SNS subscription attributes includes FilterPolicy, DeliveryPolicy, and RawMessageDelivery.

You can manually manage your Amazon SNS subscription attributes via the AWS Management Console or programmatically via AWS Development Tools (SDK and AWS CLI). Now you can automate their provisioning via AWS CloudFormation templates as well. AWS CloudFormation lets you use a simple text file to model and provision all the Amazon SNS resources for your messaging use cases, across AWS Regions and accounts, in an automated and secure manner.
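
For reference, the programmatic path mentioned above amounts to one SetSubscriptionAttributes call per attribute. A minimal boto3 sketch (the subscription ARN is a placeholder):

import json
import boto3

sns = boto3.client("sns", region_name="us-east-1")
subscription_arn = "arn:aws:sns:us-east-1:000000000000:PetTopic:<subscription-id>"

# Attribute values are passed as strings; policy attributes are JSON-encoded.
sns.set_subscription_attributes(
    SubscriptionArn=subscription_arn,
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps({"pet": ["dog", "cat"]}),
)
sns.set_subscription_attributes(
    SubscriptionArn=subscription_arn,
    AttributeName="RawMessageDelivery",
    AttributeValue="true",
)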

The following sections describe how you can simultaneously create Amazon SNS subscriptions and set their attributes via AWS CloudFormation templates.

Setting the FilterPolicy attribute

The FilterPolicy attribute is valid in the context of message filtering, regardless of the delivery protocol, and defines which type of message the subscriber expects to receive from the topic. Hence, by applying the FilterPolicy attribute, you can offload the message-filtering logic from subscribers and the message-routing logic from publishers.

To set the FilterPolicy attribute in your AWS CloudFormation template, use the syntax in the following JSON snippet. This snippet creates an Amazon SNS subscription whose endpoint is an AWS Lambda function. Simultaneously, this code also sets a subscription filter policy that matches messages carrying an attribute whose key is “pet” and value is either “dog” or “cat.”

{
   "Resources": {
      "mySubscription": {
         "Type" : "AWS::SNS::Subscription",
         "Properties" : {
            "Protocol": "lambda",
            "Endpoint": "arn:aws:lambda:us-east-1:000000000000:function:SavePet",
            "TopicArn": "arn:aws:sns:us-east-1:000000000000:PetTopic",
            "FilterPolicy": {
               "pet": ["dog", "cat"]
            }
         }
      }
   }
}
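
To see this filter policy in action, the publisher attaches a matching message attribute when posting to the topic. Here's a minimal boto3 sketch (the topic ARN is taken from the snippet above; the message body is illustrative):

import boto3

sns = boto3.client("sns", region_name="us-east-1")

# This message carries pet=dog, so it matches the filter policy and is delivered
# to the Lambda subscriber; a message without a matching "pet" attribute is filtered out.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:000000000000:PetTopic",
    Message='{"breed": "corgi", "name": "Bella"}',
    MessageAttributes={
        "pet": {"DataType": "String", "StringValue": "dog"}
    },
)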

Setting the DeliveryPolicy attribute

The DeliveryPolicy attribute is valid in the context of message delivery to HTTP endpoints and defines a delivery-retry policy. By applying the DeliveryPolicy attribute, you can control the maximum number of retries the subscriber expects, the time delay between each retry, and the backoff function. You should fine-tune these values based on the traffic volume your subscribing HTTP server can handle.

To set the DeliveryPolicy attribute in your AWS CloudFormation template, use the syntax in the following JSON snippet. This snippet creates an Amazon SNS subscription whose endpoint is an HTTP address. The code also sets a delivery policy capped at 10 retries for this subscription, with a linear backoff function.

{
   "Resources": {
      "mySubscription": {
         "Type" : "AWS::SNS::Subscription",
         "Properties" : {
            "Protocol": "https",
            "Endpoint": "https://api.myendpoint.ca/pets",
            "TopicArn": "arn:aws:sns:us-east-1:000000000000:PetTopic",
            "DeliveryPolicy": {
               "healthyRetryPolicy": {
                  "numRetries": 10,
                  "minDelayTarget": 10,
                  "maxDelayTarget": 30,
                  "numMinDelayRetries": 3,
                  "numMaxDelayRetries": 7,
                  "numNoDelayRetries": 0,
                  "backoffFunction": "linear"
               }
            }
         }
      }
   }
}

Setting the RawMessageDelivery attribute

The RawMessageDelivery attribute is valid in the context of message delivery to Amazon SQS queues and HTTP endpoints. This Boolean attribute eliminates the need for the subscriber to process the JSON formatting that is created by default to decorate all published messages with Amazon SNS metadata. When you set RawMessageDelivery to true, you get two outcomes. First, your message is delivered as is, with no metadata added. Second, your message attributes propagate from Amazon SNS to Amazon SQS, when the subscribing endpoint is an Amazon SQS queue.

To set the RawMessageDelivery attribute in your AWS CloudFormation template, use the syntax in the following JSON snippet. This snippet creates an Amazon SNS subscription whose endpoint is an Amazon SQS queue. This code also enables raw message delivery for the subscription, which prevents Amazon SNS metadata from being added to the message payload.

{
   "Resources": {
      "mySubscription": {
         "Type" : "AWS::SNS::Subscription",
         "Properties" : {
            "Protocol": "sqs",
            "Endpoint": "arn:aws:sqs:us-east-1:000000000000:PetQueue",
            "TopicArn": "arn:aws:sns:us-east-1:000000000000:PetTopic",
            "RawMessageDelivery": true
         }
      }
   }
}

Applying subscription attributes in a use case

Here’s how everything comes together. The following example is based on a car dealer company, which operates with the following distributed systems hosted on Amazon EC2 instances:

  • Car-Dealer-System – Front-office system that takes orders placed by car buyers
  • ERP-System – Enterprise resource planning, the back-office system that handles finance, accounting, human resources, and related business activities
  • CRM-System – Customer relationship management, the back-office system responsible for storing car buyers’ profile information and running sales workflows
  • SCM-System – Supply chain management, the back-office system that handles inventory tracking and demand forecast and planning

 

Whenever an order is placed in the car dealer system, the event is broadcast to all back-office systems interested in this type of event. The company applied AWS messaging services to decouple its distributed systems, promoting more scalability and maintainability for the architecture. The queues and topic used are the following:

  • Car-Sales – Amazon SNS topic that receives messages from the car dealer system. All orders placed by car buyers are published to this topic, then delivered to subscribers (two Amazon SQS queues and one HTTP endpoint).
  • ERP-Integration – Amazon SQS queue that feeds the ERP system with orders published by the car dealer system. The ERP pulls messages from this queue to track revenue and trigger related bookkeeping processes.
  • CRM-Integration – Amazon SQS queue that feeds the CRM system with orders published by the car dealer system. The CRM pulls messages from this queue to track car buyers’ interests and update sales workflows.

The company created the following three Amazon SNS subscriptions:

  • The first subscription refers to the ERP-Integration queue. This subscription has the RawMessageDelivery attribute set to true. Hence, no metadata is added to the message payload, and message attributes are propagated from Amazon SNS to Amazon SQS.
  • The second subscription refers to the CRM-Integration queue. Like the first subscription, this one also has the RawMessageDelivery attribute set to true. Additionally, it has the FilterPolicy attribute set to {“buyer-class”: [“vip”]}. This policy defines that only orders placed by VIP buyers are managed in the CRM system, and orders from other buyers are filtered out.
  • The third subscription points to the HTTP endpoint that serves the SCM-System. Unlike ERP and CRM, the SCM system provides its own HTTP API. Therefore, its HTTP endpoint was subscribed to the topic directly, without a queue in between. This subscription has a DeliveryPolicy that caps the number of retries at 20, with an exponential backoff function.

The company didn’t want to create all these resources manually, though. They wanted to turn this infrastructure into versionable code, and the ability to quickly spin up and tear down this infrastructure in an automated manner. Therefore, they created an AWS CloudFormation template to manage these AWS messaging resources: Amazon SNS topic, Amazon SNS subscriptions, Amazon SNS subscription attributes, and Amazon SQS queues.

Executing the AWS CloudFormation template

Now you’re ready to execute this AWS CloudFormation template yourself. To bootstrap this architecture in your AWS account:

    1. Download the sample AWS CloudFormation template from the repository.
    2. Go to the AWS CloudFormation console.
    3. Choose Create Stack.
    4. For Select Template, choose to upload a template to Amazon S3, and choose Browse.
    5. Select the template you downloaded and choose Next.
    6. For Specify Details:
      • Enter the following stack name: Car-Dealer-Stack.
      • Enter the HTTP endpoint to be subscribed to your topic. If you don’t have an HTTP endpoint, create a temp one.
      • Choose Next.
    7. For Options, choose Next.
    8. For Review, choose Create.
    9. Wait until your stack creation process is complete.

Now that all the infrastructure is in place, verify the Amazon SNS subscription attributes set by the AWS CloudFormation template as follows:

  1. Go to the Amazon SNS console.
  2. Choose Topics and then select the ARN associated with Car-Sales.
  3. Verify the first subscription:
    • Select the subscription related to ERP-Integration (Amazon SQS protocol).
    • Choose Other subscription actions and then choose Edit subscription attributes.
    • Note that raw message delivery is enabled, and choose Cancel to go back.
  4. Verify the second subscription:
    • Select the subscription related to CRM-Integration (Amazon SQS protocol).
    • Choose Other subscription actions and then choose Edit subscription attributes.
    • Note that raw message delivery is enabled and then choose Cancel to go back.
    • Choose Other subscription actions and then choose Edit subscription filter policy.
    • Note that the filter policy is set, and then choose Cancel to go back.
  5. Confirm the third subscription. Because the SCM system is subscribed over HTTP, Amazon SNS sends a confirmation request to the endpoint, and the subscription must be confirmed before messages are delivered to it.
  6. Verify the third subscription:
    • Select the subscription related to SCM-System (HTTP protocol).
    • Choose Other subscription actions and then choose Edit subscription delivery policy.
    •  Choose Advanced view.
    • Note that an exponential delivery retry policy is set, and then choose Cancel to go back.

Now that you have verified all subscription attributes, you can delete your AWS CloudFormation stack as follows:

  1. Go to the AWS CloudFormation console.
  2. In the list of stacks, select Car-Dealer-Stack.
  3. Choose Actions, choose Delete Stack, and then choose Yes, Delete.
  4. Wait for the stack deletion process to complete.

That’s it! At this point, you have deleted all Amazon SNS and Amazon SQS resources created in this exercise from your AWS account.

Summary

AWS CloudFormation templates enable the simultaneous creation of Amazon SNS subscriptions and their attributes (such as FilterPolicy, DeliveryPolicy, and RawMessageDelivery) in an automated and secure manner. AWS CloudFormation support for Amazon SNS subscription attributes is available now in all AWS Regions.

For information about pricing, see AWS CloudFormation Pricing. For more information on setting up Amazon SNS resources via AWS CloudFormation templates, see the AWS CloudFormation documentation.

Amazon QuickSight now supports Email Reports and Data Labels

Post Syndicated from Jose Kunnackal original https://aws.amazon.com/blogs/big-data/amazon-quicksight-now-supports-email-reports-and-data-labels/

Today, we are excited to announce the availability of email reports and data labels in Amazon QuickSight.

Email reports

With email reports in Amazon QuickSight, you can receive scheduled and one-off reports delivered directly to your email inbox. Email reports give you access to the latest information without logging in to your Amazon QuickSight account, and they also give you offline access to your data. For deeper analysis and exploration, you can easily click through from the email report to the interactive dashboard in Amazon QuickSight.

Sending reports by using email

Authors can choose to send a one-time or scheduled email report to users who have access to the dashboard within an Amazon QuickSight account. You can personalize email reports for desktop or mobile layouts depending on the recipients’ preferences.

Enabling email reports for a dashboard is easy. Simply navigate to the Share menu on the dashboard page and choose the Email Report option. To send or edit the schedule of an email report on the dashboard, you must be an owner or co-owner of the dashboard.

On this screen, you can configure the email report, with options for scheduling, email details (for example, the subject line), and recipients.

After email reports are enabled for a dashboard, all users with access to the dashboard can subscribe or unsubscribe to the email reports. They can also modify layout preferences (mobile or desktop) by navigating to the dashboard page within their account. Authors also have the option to send test email reports to themselves to ensure that they have the right formatting and layout in place.

After email reports are scheduled, they’re sent at the frequency and time specified. The owner of an Amazon QuickSight dashboard can also pause the schedule of an email report or send a one-off report between scheduled sends. If there are any errors or failures in the refresh of the underlying SPICE data set associated with the dashboard, Amazon QuickSight automatically skips delivery of the report. In this case, Amazon QuickSight also sends an error report to the dashboard owner.

Pricing: Email reports are available in Amazon QuickSight Enterprise Edition. For Amazon QuickSight authors, email reports are included in the monthly subscription charges; authors can receive unlimited email reports in a month. For Amazon QuickSight readers, charges for email reports follow the Pay-per-Session pricing model. Readers are billed $0.30 (one reader session) for each email report they receive, up to the monthly maximum charge of $5 per reader. Each $0.30 charge associated with an email report also provides readers with a future credit for an interactive, 30-minute Amazon QuickSight session within the calendar month. Charges for email reports and regular reader sessions both accrue to the $5 per month maximum charge for a reader.

To illustrate this, let’s consider some examples:

Cathy is an Amazon QuickSight reader and receives three email reports from her team on the first of every month. After she receives the reports, Cathy is charged $0.30 x 3 = $0.90, and she receives credits for three interactive Amazon QuickSight sessions in the month. Over the course of the month, Cathy accesses Amazon QuickSight 10 more times, viewing dashboards and analyzing data. At the end of the month, her total Amazon QuickSight charge is:

$0.90 (for 3 email reports) + 7 x $0.30 (10 reader sessions – 3 session credits from the email reports) = $3

As another example, if Chris is a heavy user of Amazon QuickSight, the reader maximum charge of $5 per month always applies. For example, suppose that he receives 3 email reports every week (12 email reports in a month). Suppose also that he accesses Amazon QuickSight once a day for every day of the month (30 sessions a month). His total charge for the month is only $5.

Availability: Email reports are currently available in Amazon QuickSight Enterprise Edition, in all supported AWS Regions—US East (N. Virginia and Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Singapore, Sydney, and Tokyo). For more details, refer to the Amazon QuickSight documentation.

Data labels

With this release, we also introduce data labels in Amazon QuickSight. With data labels, viewers of a dashboard can quickly and easily find the numeric values associated with data points on a chart without having to hover over them. Data labels provide additional benefits in email reports because they display metrics on the static visuals in the emails. Without data labels, these metrics would be visible only when you hover over the visuals on the dashboard.

To access the data labels, use the Format visual option when editing a visual.

With this option, you can specify the position of the data labels within the visual, font size, font color, and other visual-specific options.

Availability: Data labels are available in both Standard and Enterprise Editions, in all supported AWS Regions. For more details, refer to the documentation.

 


Additional Reading

If you found this post useful, be sure to check out 10 visualizations to try in Amazon QuickSight with sample data and Analyze Amazon Connect records with Amazon Athena, AWS Glue, and Amazon QuickSight.

 


About the Author

Jose Kunnackal is a senior product manager for Amazon QuickSight.
