Tag Archives: MQTT

12 Questions you should ask your broker vendor about MQTT clustering

Post Syndicated from The HiveMQ Team original https://www.hivemq.com/blog/12-questions-you-should-ask-your-broker-vendor-about-mqtt-clustering/


Not all MQTT broker clusters are created equal. While clusters are typically used to solve the single point of failure challenge in pub/sub systems, there are many pitfalls and edge cases that need to be addressed. These cases aren’t always obvious for companies that are evaluating MQTT solutions. We gathered the top 12 questions about MQTT clustering that came up in our customers’ evaluation processes, so you can check whether the MQTT broker of your choice meets the guarantees your project deserves.

Questions you should ask yourself

First, it’s very important to ask yourself the following questions to find out what’s important for your MQTT project.

  • Do you want the MQTT broker cluster to behave like a single MQTT broker from the client’s perspective, regardless of which cluster node a client is connected to?
  • Do you require the MQTT broker to preserve Quality of Service 1 and 2 guarantees in the cluster?
  • Should your cluster stay online and keep working even if networking problems (e.g. Network Splits) arise?
  • Should the cluster scale elastically by having the possibility to add and remove cluster nodes at runtime?
  • Do you plan to deploy the MQTT broker cluster in a cloud environment?
  • Should MQTT client sessions still be available, even if one or more cluster nodes crash?

If you answer at least one of these questions with “yes”, a resilient, reliable and scalable MQTT broker cluster is what you’re looking for. This also means that a replicated topic tree is probably not enough and you need a more holistic clustering approach that also covers QoS messages, client sessions, queued messages and retained messages. Read on to decide which questions may be important to ask to get the cluster characteristics you desire.

Questions you should ask your MQTT broker vendor

Marketing material doesn’t always answer all the questions, especially when it comes to edge cases and error cases. These 12 questions are the ones our customers asked most often during their evaluation processes. You can benefit by adding them to your own broker evaluation.

The following paragraphs discuss the reasoning behind the questions and give background information that might be useful for understanding the implications of cluster characteristics for your MQTT deployment.

Is the cluster distributed or fully replicated?

Full replication is most often used in high availability scenarios that consist of two nodes, where only one broker node serves all requests and the second node is the failover node. For use cases that need massive scalability, full replication simply doesn’t scale. The main reason is that RAM, disk space and bandwidth (between cluster nodes) are finite. If full replication were used in a consistent way, there would be n-1 replicas, which means a 10 node cluster must replicate each piece of data to 9 other nodes, a 20 node cluster to 19 other nodes, and so on.

This situation is not as severe if only the topic tree is replicated (although that has the same problems discussed above), since message routing itself doesn’t necessarily need full replication. However, if the broker cluster only replicates the topic tree, this has severe impacts on other aspects of MQTT. We will discuss that in a minute.

Split Brain: In case of a network split, does the broker continue to operate properly?

No network is reliable; requiring a reliable network as a prerequisite for an MQTT broker is a warning sign that message and data loss is inevitable in the long term. When it comes to distributed systems, you can’t sacrifice partition tolerance. So a key question for MQTT brokers is how they behave in case of network splits.

It’s important that the cluster doesn’t lose its availability by refusing to accept new MQTT clients or even disconnecting MQTT clients, regardless of whether the clients reside in a minority partition or not. So it’s also extremely important that the cluster can handle split brain scenarios. For example, suppose 500,000 MQTT clients are connected to each MQTT broker node. If a split brain occurs and a single MQTT node refuses to operate properly (e.g. because it is in a minority partition), half a million MQTT clients are forced to disconnect, which could result in a reconnection storm on all other cluster nodes.

What guarantees does the broker have when a network split occurs?

As discussed above, network splits are inevitable in cloud environments (and in on-premises networks), so the question is how the MQTT broker deals with a network split or other cluster communication failures, which can also occur due to high latencies. There are essentially three ways to deal with these partitions:

  1. Don’t handle the network split, so the whole system potentially loses availability and consistency.
  2. Trade consistency for eventual consistency and provide reconciliation mechanisms.
  3. Trade away availability and make at least the minority partition(s) unavailable, which means they are out of operation until the network split is resolved.

The CAP theorem states that we unfortunately can’t have all three of these characteristics in a distributed system at the same time: Consistency, Availability, and Partition Tolerance. Since partition tolerance can’t be sacrificed, the decision is really between consistency and availability, and the questions you should ask yourself are:

  • Is there something in your use case that doesn’t allow Eventual Consistency? If yes, would you rather want to have parts of your broker deployment unavailable but have the guarantee of strong consistency?
  • Is uptime and availability important? Should the cluster be resilient enough to handle network partitions and still serve MQTT clients?

If your MQTT broker cluster does not replicate or distribute any data other than topic subscriptions, you are unlikely to hit edge cases where the CAP tradeoffs are important to consider; in that case, you will face many other challenges instead. More on that in the next paragraphs.

Are clients able to resume a persistent session on other cluster nodes?

MQTT broker clusters typically reside behind a hardware or software load balancer. The MQTT clients are typically not aware of whether they are communicating with an MQTT broker cluster or a single MQTT broker. If the MQTT clients use persistent sessions, they expect to continue the session on any broker node in case they get disconnected for any reason. This also means that the broker cluster needs to queue messages while a client is offline.

If the MQTT broker does not support continuing an MQTT session on a different node, messages will be lost in the following cases:

  • The broker node that holds the information about the client session becomes unavailable
  • The client connects to a different broker node and creates a new session

Allowing an MQTT client to continue its session on a different cluster node is one of the key features every MQTT broker should provide if you need a reliable messaging solution based on MQTT.
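As a concrete illustration, here is a minimal sketch of a client that requests a persistent session, written against the Python paho-mqtt 1.x API (the broker hostname, client ID and topic are placeholders). Against a well-behaved cluster, the session and any queued QoS 1/2 messages should be resumed no matter which node the load balancer picks after a reconnect:

import paho.mqtt.client as mqtt

# clean_session=False asks the broker to keep subscriptions and queued
# QoS 1/2 messages for this client ID while the client is offline.
client = mqtt.Client(client_id="sensor-42", clean_session=False)

def on_connect(client, userdata, flags, rc):
    # The broker reports whether it actually resumed an existing session.
    print("connected, session present:", flags.get("session present"))
    client.subscribe("plant/+/temperature", qos=1)

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload)

client.on_connect = on_connect
client.on_message = on_message

# The load balancer may route this connection to any cluster node;
# a clustered broker should still hand back the same session state.
client.connect("broker.example.com", 1883)
client.loop_forever()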

Does the broker give message order guarantees in the cluster?

The MQTT 3.1.1 specification defines Ordered Topics, which means message ordering must be guaranteed within a single MQTT topic. Message ordering is a fundamental problem in distributed systems, though. So most MQTT broker clusters use one of the following two approaches to guarantee message ordering across cluster nodes:

  • Wall Clock Timestamps
  • Logical Timestamps

Wall clock timestamps require time synchronization between all cluster nodes. This is often achieved with NTP, which still doesn’t solve the problem that POSIX timestamps are not monotonic. So if your broker uses wall clock timestamps for message ordering, you might see ordering issues, especially for queued messages.
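For illustration only, here is the idea behind logical timestamps in the form of a simple Lamport clock, sketched in Python. Each node increments a counter for every local event and merges counters when it receives a message, which yields a monotonic ordering without any wall clock synchronization. This is a generic distributed-systems technique, not a description of any specific broker’s internals:

class LamportClock:
    """Monotonic logical clock: the ordering never runs backwards,
    even if the nodes' wall clocks drift apart."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event, e.g. a message is published on this node.
        self.time += 1
        return self.time

    def merge(self, remote_time):
        # A message arrives from another node: jump ahead of both clocks.
        self.time = max(self.time, remote_time) + 1
        return self.time

node_a, node_b = LamportClock(), LamportClock()
t1 = node_a.tick()       # publish on node A
t2 = node_b.merge(t1)    # node B receives/forwards it
assert t2 > t1           # cross-node ordering is preserved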

Can you remove and add cluster nodes at runtime?

Many MQTT projects start with a small number of MQTT devices and scale up over time. So, depending on your project, you may want to add and remove cluster nodes at runtime to meet traffic spikes in MQTT connections or message throughput. In high availability scenarios, broker hardware can be upgraded at runtime by removing a node, upgrading the server, and adding the node back to the broker cluster. Even in use cases where the number of connections and the message throughput are rather static, it is still useful to be able to replace faulty hardware at any point in time by removing and adding broker nodes without sacrificing cluster uptime.

What consistency guarantees does the broker give when it acknowledges messages?

MQTT uses acknowledgement messages to respond to CONNECT, SUBSCRIBE and UNSUBSCRIBE messages. When a broker acknowledges one of these messages, it indicates that the request and the state modification on the broker were successful (or unsuccessful). The same principle is also desired for MQTT broker clusters.

Imagine a case where an MQTT broker node acknowledges a SUBSCRIBE message to the client but does not notify all other cluster nodes of the new subscription before it sends out the acknowledgement. The MQTT client that just subscribed would then miss messages published via other nodes for an arbitrary amount of time (until all cluster nodes are aware of the new subscription). If you depend on messages being delivered to your client, make sure the consistency guarantees in the cluster are the guarantees your use case requires.
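One way to probe this behavior is to subscribe through one connection, wait for the SUBACK, and then immediately publish through a second connection that may land on a different node. A rough sketch with the Python paho-mqtt 1.x client (hostname, topic and timeouts are placeholders):

import threading
import paho.mqtt.client as mqtt

suback = threading.Event()
delivered = threading.Event()

sub = mqtt.Client(client_id="consistency-check-sub")
sub.on_subscribe = lambda c, u, mid, granted_qos: suback.set()
sub.on_message = lambda c, u, msg: delivered.set()
sub.connect("broker.example.com", 1883)   # may land on node 1
sub.loop_start()
sub.subscribe("consistency/check", qos=1)

# Publish only after the broker has acknowledged the subscription ...
suback.wait(timeout=5)

pub = mqtt.Client(client_id="consistency-check-pub")
pub.connect("broker.example.com", 1883)   # may land on node 2
pub.loop_start()
pub.publish("consistency/check", b"hello", qos=1)

# ... if the SUBACK was sent before the subscription reached the other
# nodes, this wait can time out even though the subscribe was "accepted".
print("message delivered:", delivered.wait(timeout=5))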

Is client session data preserved if a node is unavailable?

MQTT sessions typically consist of the following information:

  • Session meta information
  • Queued Messages
  • Subscriptions
  • QoS 1 and 2 message flow state

If session information is not distributed or replicated across the cluster, you are going to lose state if a broker node crashes or is unavailable for any other reason. For persistent sessions this means that in the best case you lose session information and subscriptions, and in the worst case you lose queued messages or messages in the message flow – even if you sent them with QoS 1 or 2!

If your applications can’t tolerate losing messages or persistent sessions, make sure your broker supports replication of this data across the cluster.

Does the broker cluster survive without human intervention if one or more cluster nodes crash?

In worst-case scenarios where one or more broker nodes crash, a cluster needs to transition to a stable state as quickly as possible in order to avoid cascading failures. If human intervention (e.g. reconfiguration) is required to regain a stable cluster state, this may result in long outages and complex failures. Sophisticated broker implementations don’t need any human intervention and have self-healing mechanisms built in.

Are elastic node discovery methods supported?

Depending on the deployment environment, different broker discovery methods need to be supported. To form a broker cluster, all MQTT broker nodes need a mechanism to find each other, so-called node discovery. The most popular cluster discovery methods that most brokers implement are:

  • Static Discovery with pre-defined cluster nodes
  • UDP Multicast

For use cases that don’t require elastic scaling, static discovery may be well suited. For elastic deployments that need to scale over time, UDP multicast is a good fit. Unfortunately, most IaaS providers like AWS don’t support multicast in their networks. If you plan to deploy your MQTT broker to cloud environments, you might need to rely on other discovery methods. So check whether the discovery methods your broker supports are the discovery methods your project needs.

Are rolling upgrades supported?

Downtime must be avoided at all costs for most MQTT deployments. One of the advantages of sophisticated broker clusters is that you can add and remove nodes at runtime. But just because you can add and remove nodes at runtime does not necessarily mean that you can also upgrade the cluster while old versions are still running inside it. Even if you don’t plan to upgrade the MQTT broker version often, make sure there is an upgrade path that doesn’t require bringing down the whole cluster; otherwise you can lose state and MQTT messages, and all your MQTT clients will disconnect.

Are client takeovers supported?

MQTT client takeovers are an important MQTT feature for improving the resilience of your MQTT applications. In a nutshell, a client takeover means that if a second client connects with the same client identifier, the MQTT broker will disconnect the client that is already connected. This is very important to mitigate the half-open sockets problem for MQTT clients. But what happens if the first client is connected to one node and the second client connects to another node? If the broker cluster allowed a second client with the same client identifier to join, inconsistent session state (which could result in message reordering and even message loss) would become a problem.

If you plan to utilize MQTT clients that are connected via mobile networks or any other kind of unreliable network, you may want your broker to support client takeovers.
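A quick way to check takeover behavior in a cluster is to connect two clients with the same client identifier, ideally through a load balancer so they are likely to land on different nodes. A minimal sketch with the Python paho-mqtt 1.x client (hostname and client ID are placeholders):

import time
import paho.mqtt.client as mqtt

def on_disconnect(client, userdata, rc):
    # rc != 0 means the broker closed the connection, e.g. because a
    # second client connected with the same client identifier.
    print("first client disconnected, rc =", rc)

first = mqtt.Client(client_id="device-007")
first.on_disconnect = on_disconnect
first.connect("broker.example.com", 1883)    # may land on node 1
first.loop_start()
time.sleep(1)

second = mqtt.Client(client_id="device-007") # same client identifier!
second.connect("broker.example.com", 1883)   # may land on node 2
second.loop_start()

# A cluster with working takeover support disconnects the first client,
# even though the two clients are connected to different nodes.
time.sleep(2)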

Conclusion

Unexpected situations and maintenance happen in production. Some MQTT brokers have the ability to form broker clusters to solve the single point of failure problem for pub/sub architectures. Unfortunately, not all cluster implementations are created equal, and some can result in negative surprises in error and edge cases. The questions we discussed in this blog post are the minimum you should check in your MQTT evaluation to decide whether an MQTT broker is the right choice for you. As always, there is no ultimate answer for every single deployment, so you need to decide for yourself what is important for your MQTT scenario.

If you are interested in how HiveMQ solves the challenges we discussed in the blog post, get in touch!

What additional questions do you think we should add to this list? Let us know in the comments!

[$] An introduction to MQTT

Post Syndicated from corbet original https://lwn.net/Articles/753705/rss

I was sure that somewhere there must be physically-lightweight sensors with simple power, simple networking, and a lightweight protocol that allowed them to squirt their data down the network with a minimum of overhead. So my interest was piqued when Jan-Piet Mens spoke at FLOSS UK’s Spring Conference on “Small Things for Monitoring”. Once he started passing working demonstration systems around the room without interrupting the demonstration, it was clear that MQTT was what I’d been looking for.

What’s new in HiveMQ 3.4

Post Syndicated from The HiveMQ Team original https://www.hivemq.com/whats-new-in-hivemq-3-4

We are pleased to announce the release of HiveMQ 3.4. This version of HiveMQ is the most resilient and advanced version of HiveMQ ever. The main focus of this release was on addressing the needs of the most ambitious MQTT deployments in the world: maximum performance and resilience for millions of concurrent MQTT clients. Of course, deployments of all sizes can profit from the improvements in the latest and greatest HiveMQ.

This version is a drop-in replacement for HiveMQ 3.3 and of course supports rolling upgrades with zero-downtime.

HiveMQ 3.4 brings many features that your users, administrators and plugin developers are going to love. These are the highlights:

 

New HiveMQ 3.4 features at a glance

Cluster

HiveMQ 3.4 brings various improvements in terms of scalability, availability, resilience and observability for the cluster mechanism. Many of the new features remain under the hood, but several additions stand out:

Cluster Overload Protection

The new version has a first-of-its-kind Cluster Overload Protection. The whole cluster is able to spot MQTT clients that cause overload on nodes or the cluster as a whole and protects itself from the overload. This mechanism also protects the deployment from cascading failures due to slow or failing underlying hardware (as sometimes seen on cloud providers). This feature is enabled by default and you can learn more about the mechanism in our documentation.

Dynamic Replicates

HiveMQ’s sophisticated cluster mechanism is able to scale in a linear fashion due to extremely efficient and true data distribution mechanics based on a configured replication factor. The most important aspect of every cluster is availability, which is achieved by having eventual consistency functions in place for edge cases. The 3.4 version adds dynamic replicates to the cluster so even the most challenging edge cases involving network splits don’t lead to the sacrifice of consistency for the most important MQTT operations.

Node Stress Level Metrics

All MQTT cluster nodes are now aware of their own stress level and the stress levels of other cluster members. While all stress mitigation is handled internally by HiveMQ, experienced operators may want to monitor the individual nodes’ stress levels (e.g. with Grafana) in order to investigate what caused the increase in load.

WebUI

Operators worldwide love the HiveMQ WebUI introduced with HiveMQ 3.3. We gathered all the fantastic feedback from our users and polished the WebUI, so it’s even more useful for day-to-day broker operations and remote debugging of MQTT clients. The most important changes and additions are:

Trace Recording Download

The unique Trace Recordings functionality is without doubt a lifesaver when the behavior of individual MQTT clients needs further investigation as all interactions with the broker can be traced — at runtime and at scale! Huge production deployments may accumulate multiple gigabytes of trace recordings. HiveMQ now offers a convenient way to collect all trace recordings from all nodes, zips them and allows the download via a simple button on the WebUI. Remote debugging was never easier!

Additional Client Detail Information in WebUI

The mission of the HiveMQ WebUI is to provide easy insights into the whole production MQTT cluster for operators and administrators. Individual MQTT client investigations are a piece of cake, as all available information about clients can be viewed in detail. We also added the ability to view the restrictions that apply to a specific client:

  • Maximum Inflight Queue Size
  • Client Offline Queue Messages Size
  • Client Offline Message Drop Strategy

Session Invalidation

MQTT persistent sessions are one of the outstanding features of the MQTT protocol specification. Sessions which do not expire but are never reused unnecessarily consume disk space and memory. Administrators can now invalidate individual client sessions directly in the HiveMQ WebUI so that they can be deleted safely. HiveMQ 3.4 takes care of releasing the resources on all cluster nodes after a session is invalidated.

Web UI Polishing

Most texts on the WebUI were revisited and are now clearer and crisper. The help texts also received a major overhaul and should now be more, well, helpful. In addition, many small improvements were added, which are invisible most of the time but are there to help when you need them most. For example, the WebUI now displays a warning if cluster nodes with old versions are in the cluster (which may happen if a rolling upgrade was not finished properly).

Plugin System

One of the most popular features of HiveMQ is the extensive Plugin System, which enables the integration of HiveMQ with virtually any system and allows hooking into all aspects of the MQTT lifecycle. We listened to the feedback and are pleased to announce many improvements, big and small, for the Plugin System:

Client Session Time-to-live for individual clients

HiveMQ 3.3 offered a global configuration for setting the Time-To-Live for MQTT sessions. With HiveMQ 3.4, users can now programmatically set Time-To-Live values for individual MQTT clients and can discard an MQTT session immediately.

Individual Inflight Queues

While the Inflight Queue configuration is typically sufficient in the HiveMQ default configuration, there are some use cases that require the adjustment of this configuration. It’s now possible to change the Inflight Queue size for individual clients via the Plugin System.

Plugin Service Overload Protection

The HiveMQ Plugin System is a power-user tool: it makes unbelievably useful modifications possible, but it can also put major stress on the system as a whole if the programmer is not careful. In order to protect HiveMQ instances from accidental overload, a Plugin Service Overload Protection can be configured. This rate-limits Plugin Service usage and gives feedback to the application programmer in case the rate limit is exceeded. The feature is disabled by default, but we strongly recommend updating your plugins to take advantage of it.

Session Attribute Store putIfNewer

This is one of the small bits you almost never need, but when you do, you’re ecstatic to have it. The Session Attribute Store now offers methods to put values only if the values you want to put are newer or fresher than the values already written. This is extremely useful if multiple cluster nodes want to write to the Session Attribute Store simultaneously, as it guarantees that outdated values can no longer overwrite newer values.

Disconnection Timestamp for OnDisconnectCallback

As the OnDisconnectCallback is executed asynchronously, the client might already be gone when the callback runs. It’s now easy to obtain the exact timestamp at which an MQTT client disconnected, even if the callback is executed later on. This feature may be very interesting for many plugin developers in conjunction with the Session Attribute Store putIfNewer functionality.

Operations

We ❤️ operators and we strive to provide all the tools needed for operating and administering an MQTT broker cluster at scale in any environment. A key strategy for successful operation of any system is monitoring. We added some interesting new metrics you might find useful.

System Metrics

In addition to JVM metrics, HiveMQ now also gathers operating system metrics on Linux systems. So HiveMQ is able to see for itself how the operating system views the process, including native memory, real CPU usage, and open file usage. These metrics are particularly useful if you don’t have a monitoring agent for Linux systems set up. All metrics can be found here.

Client Disconnection Metrics

The reality of many MQTT scenarios is that not all clients are able to disconnect gracefully by sending MQTT DISCONNECT messages. HiveMQ now also exposes metrics about clients that disconnected by closing the TCP connection instead of sending a DISCONNECT packet first. This is especially useful for monitoring if you regularly deal with clients that don’t have a stable connection to the MQTT brokers.

 

JMX enabled by default

JMX, the Java Management Extensions, is now enabled by default. Many HiveMQ operators use Application Performance Monitoring tools that hook into the metrics via JMX, or use plain JMX for on-the-fly debugging. While we recommend using official off-the-shelf plugins for monitoring, it’s now easier than ever to just use JMX if other solutions are not available to you.

Other notable improvements

The 3.4 release of HiveMQ is full of hidden gems and improvements. While it would be too much to highlight all small improvements, these notable changes stand out and contribute to the best HiveMQ release ever.

Topic Level Distribution Configuration

Our recommendation for all huge deployments with millions of devices is: start with separate topic prefixes by putting the dynamic topic parts right at the beginning. In reality, many customers have topics that are constructed like this: “devices/{deviceId}/status”. All topics in this example start with a common prefix, “devices”, which is the first topic level, and unfortunately that first topic level doesn’t contain a dynamic part. In order to guarantee the best scalability of the cluster and the best performance of the topic tree, customers can now configure how many topic levels are used for distribution. In the example outlined here, a topic level distribution of 2 would be perfect and guarantees the best scalability.

Mass disconnect performance improvements

Mass disconnections of MQTT clients can happen. This might be the case when, for example, a load balancer in front of the MQTT broker cluster drops connections or a mobile carrier experiences connectivity problems. Prior to HiveMQ 3.4, mass disconnect events put stress on the cluster. They are now heavily optimized, and even tens of millions of connection losses at the same time won’t put the cluster under stress.


Replication Performance Improvements

Due to the distributed nature of HiveMQ, data needs to be replicated across the cluster on certain events, e.g. when cluster topology changes occur. There are various internal improvements in HiveMQ version 3.4 which increase the replication performance significantly. Our engineers put special love into the replication of queued messages, which is now faster than ever, even for multiple millions of queued messages that need to be transferred across the cluster.

Updated Native SSL Libraries

The native SSL integration of HiveMQ was updated to the newest BoringSSL version. This results in better performance and increased security. If you’re using SSL and are not yet using the native SSL integration, we strongly recommend giving it a try; a performance improvement of more than 40% can be observed for most deployments.


Improvements for Java 9

While Java 9 was already supported by older HiveMQ versions, HiveMQ 3.4 has full-blown Java 9 support. The minimum Java version remains Java 7, although we strongly recommend using Java 8 or newer for the best performance of HiveMQ.

IoT Inspector Tool from Princeton

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/iot_inspector_t.html

Researchers at Princeton University have released IoT Inspector, a tool that analyzes the security and privacy of IoT devices by examining the data they send across the Internet. They’ve already used the tool to study a bunch of different IoT devices. From their blog post:

Finding #3: Many IoT Devices Contact a Large and Diverse Set of Third Parties

In many cases, consumers expect that their devices contact manufacturers’ servers, but communication with other third-party destinations may not be a behavior that consumers expect.

We have found that many IoT devices communicate with third-party services, of which consumers are typically unaware. We have found many instances of third-party communications in our analyses of IoT device network traffic. Some examples include:

  • Samsung Smart TV. During the first minute after power-on, the TV talks to Google Play, Double Click, Netflix, FandangoNOW, Spotify, CBS, MSNBC, NFL, Deezer, and Facebook, even though we did not sign in or create accounts with any of them.
  • Amcrest WiFi Security Camera. The camera actively communicates with cellphonepush.quickddns.com using HTTPS. QuickDDNS is a Dynamic DNS service provider operated by Dahua. Dahua is also a security camera manufacturer, although Amcrest’s website makes no references to Dahua. Amcrest customer service informed us that Dahua was the original equipment manufacturer.

  • Halo Smoke Detector. The smart smoke detector communicates with broker.xively.com. Xively offers an MQTT service, which allows manufacturers to communicate with their devices.

  • Geeni Light Bulb. The Geeni smart bulb communicates with gw.tuyaus.com, which is operated by TuYa, a China-based company that also offers an MQTT service.

We also looked at a number of other devices, such as Samsung Smart Camera and TP-Link Smart Plug, and found communications with third parties ranging from NTP pools (time servers) to video storage services.

Their first two findings are that “Many IoT devices lack basic encryption and authentication” and that “User behavior can be inferred from encrypted IoT device traffic.” No surprises there.

Boingboing post.

Related: IoT Hall of Shame.

AWS AppSync – Production-Ready with Six New Features

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-appsync-production-ready-with-six-new-features/

If you build (or want to build) data-driven web and mobile apps and need real-time updates and the ability to work offline, you should take a look at AWS AppSync. Announced in preview form at AWS re:Invent 2017 and described in depth here, AWS AppSync is designed for use in iOS, Android, JavaScript, and React Native apps. AWS AppSync is built around GraphQL, an open, standardized query language that makes it easy for your applications to request the precise data that they need from the cloud.

I’m happy to announce that the preview period is over and that AWS AppSync is now generally available and production-ready, with six new features that will simplify and streamline your application development process:

Console Log Access – You can now see the CloudWatch Logs entries that are created when you test your GraphQL queries, mutations, and subscriptions from within the AWS AppSync Console.

Console Testing with Mock Data – You can now create and use mock context objects in the console for testing purposes.

Subscription Resolvers – You can now create resolvers for AWS AppSync subscription requests, just as you can already do for query and mutate requests.

Batch GraphQL Operations for DynamoDB – You can now make use of DynamoDB’s batch operations (BatchGetItem and BatchWriteItem) across one or more tables in your resolver functions.

CloudWatch Support – You can now use Amazon CloudWatch Metrics and CloudWatch Logs to monitor calls to the AWS AppSync APIs.

CloudFormation Support – You can now define your schemas, data sources, and resolvers using AWS CloudFormation templates.

A Brief AppSync Review
Before diving in to the new features, let’s review the process of creating an AWS AppSync API, starting from the console. I click Create API to begin:

I enter a name for my API and (for demo purposes) choose to use the Sample schema:

The schema defines a collection of GraphQL object types. Each object type has a set of fields, with optional arguments:

If I was creating an API of my own I would enter my schema at this point. Since I am using the sample, I don’t need to do this. Either way, I click on Create to proceed:

The GraphQL schema type defines the entry points for the operations on the data. All of the data stored on behalf of a particular schema must be accessible using a path that begins at one of these entry points. The console provides me with an endpoint and key for my API:

It also provides me with guidance and a set of fully functional sample apps that I can clone:

When I clicked Create, AWS AppSync created a pair of Amazon DynamoDB tables for me. I can click Data Sources to see them:

I can also see and modify my schema, issue queries, and modify an assortment of settings for my API.

Let’s take a quick look at each new feature…

Console Log Access
The AWS AppSync Console already allows me to issue queries and to see the results, and now provides access to relevant log entries. In order to see the entries, I must enable logs (as detailed below), open up the LOGS, and check the checkbox. Here’s a simple mutation query that adds a new event. I enter the query and click the arrow to test it:

I can click VIEW IN CLOUDWATCH for a more detailed view:

To learn more, read Test and Debug Resolvers.

Console Testing with Mock Data
You can now create a mock context object in the console that will be passed to one of your resolvers for testing purposes. I’ll add a testResolver item to my schema:

Then I locate it on the right-hand side of the Schema page and click Attach:

I choose a data source (this is for testing and the actual source will not be accessed), and use the Put item mapping template:

Then I click Select test context, choose Create New Context, assign a name to my test content, and click Save (as you can see, the test context contains the arguments from the query along with values to be returned for each field of the result):

After I save the new Resolver, I click Test to see the request and the response:

Subscription Resolvers
Your AWS AppSync application can monitor changes to any data source using the @aws_subscribe GraphQL schema directive and defining a Subscription type. The AWS AppSync client SDK connects to AWS AppSync using MQTT over WebSockets and the application is notified after each mutation. You can now attach resolvers (which convert GraphQL payloads into the protocol needed by the underlying storage system) to your subscription fields and perform authorization checks when clients attempt to connect. This allows you to perform the same fine-grained authorization routines across queries, mutations, and subscriptions.

To learn more about this feature, read Real-Time Data.

Batch GraphQL Operations
Your resolvers can now make use of DynamoDB batch operations that span one or more tables in a region. This allows you to use a list of keys in a single query, read records from multiple tables, write records in bulk to multiple tables, and conditionally write or delete related records across multiple tables.

In order to use this feature, the IAM role that you use to access your tables must grant access to DynamoDB’s BatchGetItem and BatchWriteItem operations.

To learn more, read the DynamoDB Batch Resolvers tutorial.

CloudWatch Logs Support
You can now tell AWS AppSync to log API requests to CloudWatch Logs. Click on Settings and Enable logs, then choose the IAM role and the log level:

CloudFormation Support
You can use the following CloudFormation resource types in your templates to define AWS AppSync resources:

AWS::AppSync::GraphQLApi – Defines an AppSync API (its name and authentication settings).

AWS::AppSync::ApiKey – Defines the access key needed to access the data source.

AWS::AppSync::GraphQLSchema – Defines a GraphQL schema.

AWS::AppSync::DataSource – Defines a data source.

AWS::AppSync::Resolver – Defines a resolver by referencing a schema and a data source, and includes a mapping template for requests.

Here’s a simple schema definition in YAML form:

  AppSyncSchema:
    Type: "AWS::AppSync::GraphQLSchema"
    DependsOn:
      - AppSyncGraphQLApi
    Properties:
      ApiId: !GetAtt AppSyncGraphQLApi.ApiId
      Definition: |
        schema {
          query: Query
          mutation: Mutation
        }
        type Query {
          singlePost(id: ID!): Post
          allPosts: [Post]
        }
        type Mutation {
          putPost(id: ID!, title: String!): Post
        }
        type Post {
          id: ID!
          title: String!
        }

Available Now
These new features are available now and you can start using them today! Here are a couple of blog posts and other resources that you might find to be of interest:

Jeff;


Running ActiveMQ in a Hybrid Cloud Environment with Amazon MQ

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/running-activemq-in-a-hybrid-cloud-environment-with-amazon-mq/

This post courtesy of Greg Share, AWS Solutions Architect

Many organizations, particularly enterprises, rely on message brokers to connect and coordinate different systems. Message brokers enable distributed applications to communicate with one another, serving as the technological backbone for their IT environment, and ultimately their business services. Applications depend on messaging to work.

In many cases, those organizations have started to build new or “lift and shift” applications to AWS. In some cases, there are applications, such as mainframe systems, too costly to migrate. In these scenarios, those on-premises applications still need to interact with cloud-based components.

Amazon MQ is a managed message broker service for ActiveMQ that enables organizations to send messages between applications in the cloud and on-premises, enabling hybrid environments and application modernization. For example, you can invoke AWS Lambda from queues and topics managed by Amazon MQ brokers to integrate legacy systems with serverless architectures. ActiveMQ is an open-source message broker written in Java that is packaged with clients in multiple languages, the Java Message Service (JMS) client being one example.

This post shows how you can use Amazon MQ to integrate on-premises and cloud environments using the network of brokers feature of ActiveMQ. It provides configuration parameters for a one-way duplex connection for the flow of messages from an on-premises ActiveMQ message broker to Amazon MQ.

ActiveMQ and the network of brokers

First, look at queues within ActiveMQ and then at the network of brokers as a mechanism to distribute messages.

The network of brokers behaves differently from models such as physical networks. The key consideration is that the production (sending) of a message is decoupled from the consumption of that message. Think of the delivery of a parcel: the parcel is sent by the supplier (producer) to the end customer (consumer). The path it took to get there is of little concern to the customer, as long as the package arrives.

The same logic can be applied to the network of brokers. Here’s how you build the flow from a simple message to a queue and build toward a network of brokers. Before you look at setting up a hybrid connection, I discuss how a broker processes messages in a simple scenario.

When a message is sent from a producer to a queue on a broker, the following steps occur:

  1. A message is sent to a queue from the producer.
  2. The broker persists this in its store or journal.
  3. At this point, an acknowledgement (ACK) is sent to the producer from the broker.

When a consumer looks to consume the message from that same queue, the following steps occur (a short code sketch of the full round trip follows this list):

  1. The message listener (consumer) calls the broker, which creates a subscription to the queue.
  2. Messages are fetched from the message store and sent to the consumer.
  3. The consumer acknowledges that the message has been received before processing it.
  4. Upon receiving the ACK, the broker sets the message as having been consumed. By default, this deletes it from the queue.
    • You can set the consumer to ACK after processing by setting up transaction management or handle it manually using Session.CLIENT_ACKNOWLEDGE.
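The produce/acknowledge/consume round trip described above can be exercised against an ActiveMQ broker from Python with the stomp.py client. This is only a sketch, assuming a local broker with the default STOMP connector on port 61613, placeholder credentials, and a recent stomp.py whose listeners receive a single Frame argument; it is independent of the network-of-brokers setup described below:

import time
import stomp

class QueueListener(stomp.ConnectionListener):
    def on_message(self, frame):
        # The broker dispatched a stored message to this subscriber;
        # with ack='auto' the client library acknowledges it for us.
        print("received:", frame.body)

conn = stomp.Connection([("localhost", 61613)])
conn.set_listener("", QueueListener())
conn.connect("admin", "admin", wait=True)

# Producer side: the broker persists the message in its store and
# acknowledges the send back to the producer.
conn.send(destination="/queue/TestingQ", body="hello from on-premises")

# Consumer side: subscribing makes the broker dispatch stored messages.
conn.subscribe(destination="/queue/TestingQ", id=1, ack="auto")

time.sleep(2)
conn.disconnect()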

Static propagation

I now introduce the concept of static propagation with the network of brokers as the mechanism for message transfer from on-premises brokers to Amazon MQ.  Static propagation refers to message propagation that occurs in the absence of subscription information. In this case, the objective is to transfer messages arriving at your selected on-premises broker to the Amazon MQ broker for consumption within the cloud environment.

After you configure static propagation with a network of brokers, the following occurs:

  1. The on-premises broker receives a message from a producer for a specific queue.
  2. The on-premises broker sends (statically propagates) the message to the Amazon MQ broker.
  3. The Amazon MQ broker sends an acknowledgement to the on-premises broker, which marks the message as having been consumed.
  4. Amazon MQ holds the message in its queue ready for consumption.
  5. A consumer connects to Amazon MQ broker, subscribes to the queue in which the message resides, and receives the message.
  6. Amazon MQ broker marks the message as having been consumed.

Getting started

The first step is creating an Amazon MQ broker.

  1. Sign in to the Amazon MQ console and launch a new Amazon MQ broker.
  2. Name your broker and choose Next step.
  3. For Broker instance type, choose your instance size:
    mq.t2.micro
    mq.m4.large
  4. For Deployment mode, enter one of the following:
    Single-instance broker for development and test implementations (recommended)
    Active/standby broker for high availability in production environments
  5. Scroll down and enter your user name and password.
  6. Expand Advanced Settings.
  7. For VPC, Subnet, and Security Group, pick the values for the resources in which your broker will reside.
  8. For Public Accessibility, choose Yes, as connectivity is internet-based. Another option would be to use private connectivity between your on-premises network and the VPC, an example being an AWS Direct Connect or VPN connection. In that case, you could set Public Accessibility to No.
  9. For Maintenance, leave the default value, No preference.
  10. Choose Create Broker. Wait several minutes for the broker to be created.

After creation is complete, you see your broker listed.

For connectivity to work, you must configure the security group where Amazon MQ resides. For this post, I focus on the OpenWire protocol.

For OpenWire connectivity, allow port 61617 access to Amazon MQ from your on-premises ActiveMQ broker’s source IP address. For alternate protocols, see the Amazon MQ broker configuration information for the ports required (a short connection sketch for the MQTT endpoint follows the list):

OpenWire – ssl://xxxxxxx.xxx.com:61617
AMQP – amqp+ssl:// xxxxxxx.xxx.com:5671
STOMP – stomp+ssl:// xxxxxxx.xxx.com:61614
MQTT – mqtt+ssl:// xxxxxxx.xxx.com:8883
WSS – wss:// xxxxxxx.xxx.com:61619
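The rest of this post uses the OpenWire endpoint, but a client can also reach the broker directly over one of the other listed protocols. Here is a minimal sketch for the MQTT endpoint (port 8883) with the Python paho-mqtt client; the endpoint, credentials and topic are placeholders taken from your own Amazon MQ console:

import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="hybrid-test-client")
client.username_pw_set("username", "password")  # the broker user created above
client.tls_set()   # Amazon MQ endpoints are TLS-only; use the system CA store

client.connect("xxxxxxx.xxx.com", 8883)
client.loop_start()

# Publish one test message and wait until the broker acknowledges it.
info = client.publish("testing/topic", b"hello over MQTT", qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()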

Configuring the network of brokers

Configuring the network of brokers with static propagation occurs on the on-premises broker by applying changes to the following file:
<activemq install directory>/conf/activemq.xml

Network connector

This is the first configuration item required to enable a network of brokers. It is only required on the on-premises broker, which initiates and creates the connection with Amazon MQ. This connection, after it’s established, enables the flow of messages in either direction between the on-premises broker and Amazon MQ. The focus of this post is the uni-directional flow of messages from the on-premises broker to Amazon MQ.

The default activemq.xml file does not include the network connector configuration. Add this with the networkConnector element. In this scenario, edit the on-premises broker activemq.xml file to include the following information between <systemUsage> and <transportConnectors>:

<networkConnectors>
             <networkConnector 
                name="Q:source broker name->target broker name"
                duplex="false" 
                uri="static:(ssl:// aws mq endpoint:61617)" 
                userName="username"
                password="password" 
                networkTTL="2" 
                dynamicOnly="false">
                <staticallyIncludedDestinations>
                    <queue physicalName="queuename"/>
                </staticallyIncludedDestinations> 
                <excludedDestinations>
                      <queue physicalName=">" />
                </excludedDestinations>
             </networkConnector> 
     </networkConnectors>

The following components are the most important elements when configuring your on-premises broker.

  • name – Name of the network bridge. In this case, it specifies two things:
    • That this connection relates to an ActiveMQ queue (Q) as opposed to a topic (T), for reference purposes.
    • The source broker and target broker.
  • duplex – Setting this to false ensures that messages traverse uni-directionally from the on-premises broker to Amazon MQ.
  • uri – Specifies the remote endpoint to which to connect for message transfer. In this case, it is an OpenWire endpoint on your Amazon MQ broker. This information can be obtained from the Amazon MQ console or via the API.
  • username and password – The same username and password configured when creating the Amazon MQ broker, and used to access the Amazon MQ ActiveMQ console.
  • networkTTL – Number of brokers in the network through which messages and subscriptions can pass. Leave this setting at the current value, if it is already included in your broker connection.
  • staticallyIncludedDestinations > queue physicalName – The destination ActiveMQ queue for which messages are destined. This is the queue that is propagated from the on-premises broker to the Amazon MQ broker for message consumption.

After the network connector is configured, you must restart the ActiveMQ service on the on-premises broker for the changes to be applied.

Verify the configuration

There are a number of places within the ActiveMQ console of your on-premises and Amazon MQ brokers to browse to verify that the configuration is correct and the connection has been established.

On-premises broker

Launch the ActiveMQ console of your on-premises broker and navigate to Network. You should see an active network bridge similar to the following:

This identifies that the connection between your on-premises broker and your Amazon MQ broker is up and running.

Now navigate to Connections and scroll to the bottom of the page. Under the Network Connectors subsection, you should see a connector labeled with the name: value that you provided within the ActiveMQ.xml configuration file. You should see an entry similar to:

Amazon MQ broker

Launch the ActiveMQ console of your Amazon MQ broker and navigate to Connections. Scroll to the Connections openwire subsection and you should see a connection specified that references the name: value that you provided within the ActiveMQ.xml configuration file. You should see an entry similar to:

If you configured the uri: for AMQP, STOMP, MQTT, or WSS as opposed to Openwire, you would see this connection under the corresponding section of the Connections page.

Testing your message flow

The setup described outlines a way for messages produced on premises to be propagated to the cloud for consumption in the cloud. This section provides steps on verifying the message flow.

Verify that the queue has been created

After you specify this queue name as staticallyIncludedDestinations > queue physicalName: and your ActiveMQ service starts, you see the following on your on-premises ActiveMQ console Queues page.

As you can see, no messages have been sent but you have one consumer listed. If you then choose Active Consumers under the Views column, you see Active Consumers for TestingQ.

This is telling you that your Amazon MQ broker is a consumer of your on-premises broker for the testing queue.

Produce and send a message to the on-premises broker

Now, produce a message on an on-premises producer and send it to your on-premises broker to a queue named TestingQ. If you navigate back to the queues page of your on-premises ActiveMQ console, you see that the messages enqueued and messages dequeued column count for your TestingQ queue have changed:

What this means is that the message originating from the on-premises producer has traversed the on-premises broker and propagated immediately to the Amazon MQ broker. At this point, the message is no longer available for consumption from the on-premises broker.

If you access the ActiveMQ console of your Amazon MQ broker and navigate to the Queues page, you see the following for the TestingQ queue:

This means that the message originally sent to your on-premises broker has traversed the unidirectional network bridge of the network of brokers and is ready to be consumed from your Amazon MQ broker. The indicator is the Number of Pending Messages column.

Consume the message from an Amazon MQ broker

Connect to the Amazon MQ TestingQ queue from a consumer within the AWS Cloud environment for message consumption. Log on to the ActiveMQ console of your Amazon MQ broker and navigate to the Queue page:

As you can see, the Number of Pending Messages column figure has changed to 0 as that message has been consumed.

This diagram outlines the message lifecycle from the on-premises producer to the on-premises broker, traversing the hybrid connection between the on-premises broker and Amazon MQ, and finally consumption within the AWS Cloud.

Conclusion

This post focused on an ActiveMQ-specific scenario for transferring messages within an ActiveMQ queue from an on-premises broker to Amazon MQ.

For other on-premises brokers, such as IBM MQ, another approach would be to run an ActiveMQ broker on-premises and use JMS bridging to IBM MQ, while using the approach in this post to forward to Amazon MQ. Yet another approach would be to use Apache Camel for more sophisticated routing.

I hope that you have found this example of hybrid messaging between an on-premises environment and the AWS Cloud to be useful. Many customers are already using on-premises ActiveMQ brokers, and this is a great use case for enabling hybrid cloud scenarios.

To learn more, see the Amazon MQ website and Developer Guide. You can try Amazon MQ for free with the AWS Free Tier, which includes up to 750 hours of a single-instance mq.t2.micro broker and up to 1 GB of storage per month for one year.

 

This IoT Pet Monitor barks back

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/iot-pet-monitor/

Jennifer Fox, founder of FoxBot Industries, uses a Raspberry Pi pet monitor to check the sound levels of her home while she is out, allowing her to keep track of when her dog Marley gets noisy or agitated, and to interact with the gorgeous furball accordingly.

Bark Back Project Demo

A quick overview and demo of the Bark Back, a project to monitor and interact with your pet. Check out the full tutorial here: https://learn.sparkfun.com/tutorials/bark-back-interactive-pet-monitor For any licensing requests please contact [email protected]

Marley, bark!

Using a Raspberry Pi 3, speakers, SparkFun’s MEMS microphone breakout board, and an analogue-to-digital converter (ADC), the IoT Pet Monitor is fairly easy to recreate, all thanks to Jennifer’s full tutorial on the FoxBot website.

Building the pet monitor

In a nutshell, once the Raspberry Pi and the appropriate bits and pieces are set up, you’ll need to sign up at CloudMQTT — it’s free if you select the Cute Cat account. CloudMQTT will create an invisible bridge between your home and wherever you are that isn’t home, so that you can check in on your pet monitor.

Screenshot CloudMQTT account set-up — IoT Pet Monitor Bark Back Raspberry Pi

Image c/o FoxBot Industries

Within the project code, you’ll be able to calculate the peak-to-peak amplitude of sound the microphone picks up. Then you can decide how noisy is too noisy when it comes to the occasional whine and bark of your beloved pup.
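As a rough sketch of that idea (not Jennifer’s actual code; the sample window, threshold, topic and CloudMQTT credentials are made-up placeholders, and the ADC read is left as a stub), the loop below computes the peak-to-peak amplitude over a short window and publishes an alert to CloudMQTT whenever the threshold is crossed:

import time
import paho.mqtt.publish as publish

THRESHOLD = 400        # made-up peak-to-peak value that counts as "too noisy"
WINDOW_SECONDS = 0.5

def read_adc():
    # Placeholder: return one microphone sample from the ADC (e.g. via SPI).
    raise NotImplementedError

def peak_to_peak(duration):
    samples = []
    end = time.time() + duration
    while time.time() < end:
        samples.append(read_adc())
    return max(samples) - min(samples)

while True:
    level = peak_to_peak(WINDOW_SECONDS)
    if level > THRESHOLD:
        # Tell CloudMQTT (and therefore you, wherever you are) about the noise.
        publish.single(
            "home/barkback/alert",
            payload=str(level),
            hostname="your-instance.cloudmqtt.com",
            port=1883,
            auth={"username": "user", "password": "password"},
        )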

MEMS microphone breakout board — IoT Pet Monitor Bark Back Raspberry Pi

The MEMS microphone breakout board collects sound data and relays it back to the Raspberry Pi via the ADC.
Image c/o FoxBot Industries

Next you can import sounds to a preset song list that will be played back when the volume rises above your predefined threshold. As Jennifer states in the tutorial, the sounds can easily be recorded via apps such as GarageBand, or even on your mobile phone.

Using the pet monitor

Whenever the Bark Back IoT Pet Monitor is triggered to play back audio, this information is fed to the CloudMQTT service, allowing you to see if anything is going on back home.

A sitting dog with a doll in its mouth — IoT Pet Monitor Bark Back Raspberry Pi

*incoherent coos of affection from Alex*
Image c/o FoxBot Industries

And as Jennifer recommends, an update of the project could include a camera or sensors to feed back more information about your home environment.

If you’ve created something similar, be sure to let us know in the comments. And if you haven’t, but you’re now planning to build your own IoT pet monitor, be sure to let us know in the comments. And if you don’t have a pet but just want to say hi…that’s right, be sure to let us know in the comments.

The post This IoT Pet Monitor barks back appeared first on Raspberry Pi.

timeShift(GrafanaBuzz, 1w) Issue 28

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/01/05/timeshiftgrafanabuzz-1w-issue-28/

Happy new year! Grafana Labs is getting back in the swing of things after taking some time off to celebrate 2017, and spending time with family and friends. We’re diligently working on the new Grafana v5.0 release (planning v5.0 beta release by end of January), which includes a ton of new features, a new layout engine, and a polished UI. We’d love to hear your feedback!


Latest Stable Release

Grafana 4.6.3 is now available. Latest bugfixes include:

  • Gzip: Fixes bug with Gravatar images when gzip was enabled #5952
  • Alert list: Now shows alert state changes even after adding manual annotations on dashboard #99513
  • Alerting: Fixes bug where rules evaluated as firing when all conditions were false and using the OR operator. #93183
  • Cloudwatch: CloudWatch no longer displays metrics’ default alias #101514, thx @mtanda

Download Grafana 4.6.3 Now


From the Blogosphere

Why Observability Matters – Now and in the Future: Our own Carl Bergquist teamed up with Neil Gehani, Director of Product at Weaveworks to discuss best practices on how to get started with monitoring your application and infrastructure. This video focuses on modern containerized applications instrumented to use Prometheus to generate metrics and Grafana to visualize them.

How to Install and Secure Grafana on Ubuntu 16.04: In this tutorial, you’ll learn how to install and secure Grafana with an SSL certificate and an Nginx reverse proxy, then you’ll modify Grafana’s default settings for even tighter security.

Monitoring Informix with Grafana: Ben walks us through how to use Grafana to visualize data from IBM Informix and offers a practical demonstration using Docker containers. He also talks about his philosophy of sharing dashboards across teams, important metrics to collect, and how he would like to improve his monitoring stack.

Monitor your hosts with Glances + InfluxDB + Grafana: Glances is a cross-platform system monitoring tool written in Python. This article takes you step by step through the pieces of the stack, installation, and configuration, and provides a sample dashboard to get you up and running.


GrafanaCon Tickets are Going Fast!

Lock in your seat for GrafanaCon EU while there are still tickets available! Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

We have some exciting talks lined up from Google, CERN, Bloomberg, eBay, Red Hat, Tinder, Fastly, Automattic, Prometheus, InfluxData, Percona and more! You can see the full list of speakers below, but be sure to get your ticket now.

Get Your Ticket Now

GrafanaCon EU will feature talks from:

“Google Bigtable”
Misha Brukman
PROJECT MANAGER,
GOOGLE CLOUD
GOOGLE

“Monitoring at Bloomberg”
Stig Sorensen
HEAD OF TELEMETRY
BLOOMBERG

“Monitoring at Bloomberg”
Sean Hanson
SOFTWARE DEVELOPER
BLOOMBERG

“Monitoring Tinder’s Billions of Swipes with Grafana”
Utkarsh Bhatnagar
SR. SOFTWARE ENGINEER
TINDER

“Grafana at CERN”
Borja Garrido
PROJECT ASSOCIATE
CERN

“Monitoring the Huge Scale at Automattic”
Abhishek Gahlot
SOFTWARE ENGINEER
Automattic

“Real-time Engagement During the 2016 US Presidential Election”
Anna MacLachlan
CONTENT MARKETING MANAGER
Fastly

“Real-time Engagement During the 2016 US Presidential Election”
Gerlando Piro
FRONT END DEVELOPER
Fastly

“Grafana v5 and the Future”
Torkel Odegaard
CREATOR | PROJECT LEAD
GRAFANA

“Prometheus for Monitoring Metrics”
Brian Brazil
FOUNDER
ROBUST PERCEPTION

“What We Learned Integrating Grafana with Prometheus”
Peter Zaitsev
CO-FOUNDER | CEO
PERCONA

“The Biz of Grafana”
Raj Dutt
CO-FOUNDER | CEO
GRAFANA LABS

“What’s New In Graphite”
Dan Cech
DIR, PLATFORM SERVICES
GRAFANA LABS

“The Design of IFQL, the New Influx Functional Query Language”
Paul Dix
CO-FOUNDER | CTO
INFLUXDATA

“Writing Grafana Dashboards with Jsonnet”
Julien Pivotto
OPEN SOURCE CONSULTANT
INUITS

“Monitoring AI Platform at eBay”
Deepak Vasthimal
MTS-2 SOFTWARE ENGINEER
EBAY

“Running a Power Plant with Grafana”
Ryan McKinley
DEVELOPER
NATEL ENERGY

“Performance Metrics and User Experience: A “Tinder” Experience”
Susanne Greiner
DATA SCIENTIST
WÜRTH PHOENIX S.R.L.

“Analyzing Performance of OpenStack with Grafana Dashboards”
Alex Krzos
SENIOR SOFTWARE ENGINEER
RED HAT INC.

“Storage Monitoring at Shell Upstream”
Arie Jan Kraai
STORAGE ENGINEER
SHELL TECHNICAL LANDSCAPE SERVICE

“The RED Method: How To Instrument Your Services”
Tom Wilkie
FOUNDER
KAUSAL

“Grafana Usage in the Quality Assurance Process”
Andrejs Kalnacs
LEAD SOFTWARE DEVELOPER IN TEST
EVOLUTION GAMING

“Using Prometheus and Grafana for Monitoring my Power Usage”
Erwin de Keijzer
LINUX ENGINEER
SNOW BV

“Weather, Power & Market Forecasts with Grafana”
Max von Roden
DATA SCIENTIST
ENERGY WEATHER

“Weather, Power & Market Forecasts with Grafana”
Steffen Knott
HEAD OF IT
ENERGY WEATHER

“Inherited Technical Debt – A Tale of Overcoming Enterprise Inertia”
Jordan J. Hamel
HEAD OF MONITORING PLATFORMS
AMGEN

“Grafanalib: Dashboards as Code”
Jonathan Lange
VP OF ENGINEERING
WEAVEWORKS

“The Journey of Shifting the MQTT Broker HiveMQ to Kubernetes”
Arnold Bechtoldt
SENIOR SYSTEMS ENGINEER
INOVEX

“Graphs Tell Stories”
Blerim Sheqa
SENIOR DEVELOPER
NETWAYS

“Graphite@Scale or How to Store Millions of Metrics per Second”
Vladimir Smirnov
SYSTEM ADMINISTRATOR
Booking.com


Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We also like to make sure we mention other Grafana-related events happening all over the world. If you’re putting on just such an event, let us know and we’ll list it here.

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. There is no need to register; all are welcome.

Jfokus | Stockholm, Sweden – Feb 5-7, 2018:
Carl Bergquist – Quickie: Monitoring? Not OPS Problem

Why should we monitor our system? Why can’t we just rely on the operations team anymore? They used to be able to do that. What’s currently changing? Presentation content:

  • Why do we monitor our systems
  • How did it use to work?
  • What’s changing
  • Why we need to shift focus
  • Everyone should be on call
  • Resilience is the goal (the best way to have someone care about quality is to make them responsible)

Register Now

Jfokus | Stockholm, Sweden – Feb 5-7, 2018:
Leonard Gram – Presentation: DevOps Deconstructed

What’s a Site Reliability Engineer and how’s that role different from the DevOps engineer my boss wants to hire? I really don’t want to be on call, should I? Is Docker the right place for my code or am I better off just going straight to Serverless? And why should I care about any of it? I’ll try to answer some of these questions while looking at what DevOps really is about and how the commoditisation of servers through “the cloud” ties into it all. This session will be an opinionated piece from a developer who’s been on call for the past 6 years and would like to convince you to do the same, at least once.

Register Now

Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Awesome! Let us know if you have any questions – we’re happy to help out. We also have a bunch of screencasts to help you get going.


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

That’s a wrap! Let us know what you think about timeShift. Submit a comment on this article below, or post something at our community forum. See you next year!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

HiveMQ 3.3.2 released

Post Syndicated from The HiveMQ Team original https://www.hivemq.com/blog/hivemq-3-3-2-released/

The HiveMQ team is pleased to announce the availability of HiveMQ 3.3.2. This is a maintenance release for the 3.3 series and brings the following improvements:

  • Improved unsubscribe performance
  • The Web UI now shows a warning if different HiveMQ versions are in a cluster
  • At startup, HiveMQ now logs the enabled cipher suites and protocols for TLS listeners
  • The Web UI Dashboard now shows MQTT Publishes instead of all MQTT messages in the graphs
  • Updated integrated native SSL/TLS library to latest version
  • Improved message ordering while the cluster topology changes
  • Fixed a cosmetic NullPointerException with background cleanup jobs
  • Fixed an issue where Web UI Popups could not be closed on IE/Edge and Safari
  • Fixed an issue which could lead to an IllegalArgumentException with a QoS 0 message in a rare edge-case
  • Improved persistence migrations for updating single HiveMQ deployments
  • Fixed a reference counting issue
  • Fixed an issue with rolling upgrades if the AsyncMetricService is used while the update is in progress

You can download the new HiveMQ version here.

We recommend upgrading if you are a HiveMQ 3.3.x user.

Have a great day,
The HiveMQ Team

MQTT 5: Is it time to upgrade to MQTT 5 yet?

Post Syndicated from The HiveMQ Team original https://www.hivemq.com/blog/mqtt-5-time-to-upgrade-yet/

MQTT 5 - Is it time to upgrade yet?

Is it time to upgrade to MQTT 5 yet?

Welcome to this week’s blog post! After last week’s Introduction to MQTT 5, many readers wondered when the successor to MQTT 3.1.1 will be ready for prime time and can be used in new and existing projects.

Before we try to answer the question in more detail, we’d love to hear your thoughts about upgrading to MQTT 5. We prepared a small survey below. Let us know what your MQTT 5 upgrade plans are!

The MQTT 5 OASIS Standard

As of late December 2017, the MQTT 5 specification is not yet available as an official “Committee Specification”. In other words: MQTT 5 is not officially available yet. The foundation for every implementation of the standard is the official release of the specification by the Technical Committee at OASIS.

The good news: Although no official version of the standard is available yet, fundamental changes to the current state of the specification are not expected.
The Public Review phase of the “Committee Specification Draft 2” finished without any major comments or issues. We at HiveMQ expect the MQTT 5 standard to be released in very late December 2017 or January 2018.

Current state of client libraries

To start using MQTT 5, you need two participants: An MQTT 5 client library implementation in your programming language(s) of choice and an MQTT 5 broker implementation (like HiveMQ). If both components support the new standard, you are good to go and can use the new version in your projects.

When it comes to MQTT libraries, Eclipse Paho is the one-stop shop for MQTT clients in most programming languages. A recent Paho mailing list entry stated that Paho plans to release MQTT 5 client libraries by the end of June 2018 for the following programming languages:

  • C (+ embedded C)
  • Java
  • Go
  • C++

If you’re feeling adventurous, at least the Java Paho client has preliminary MQTT 5 support available. You can play around with the API and get a feel for the upcoming Paho version. Just build the library from source and test it, but be aware that this is not safe for production use.
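For readers who want a concrete starting point, here is a minimal sketch of publishing a message with the stable Paho Java client for MQTT 3.1.1; the preliminary MQTT 5 API is expected to follow the same basic shape, although packages and class names may still change before release. The broker URL, client ID, and topic below are placeholders, not part of any official example.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class PahoQuickstart {
    public static void main(String[] args) throws Exception {
        // Placeholder broker address – point this at your own broker (e.g. a local HiveMQ instance)
        MqttClient client = new MqttClient("tcp://localhost:1883", "test-client-1");

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        client.connect(options);

        // Publish a single message with QoS 1 to a placeholder topic
        MqttMessage message = new MqttMessage("hello from Paho".getBytes());
        message.setQos(1);
        client.publish("test/topic", message);

        client.disconnect();
    }
}
```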

There is also a very basic test broker implementation available at Eclipse Paho which can be used for playing around. This is of course only for very basic tests and does not support all MQTT 5 features yet. If you’re planning to write your own library, this may be a good tool to test your implementation against.

There are also other individual MQTT library projects that may be worth checking out. As of December 2017, most of these libraries have not yet published an MQTT 5 roadmap.

HiveMQ and MQTT 5

A client that is ready for MQTT 5 is not enough on its own to use the new version of the protocol. The counterpart, the MQTT broker, also needs to fully support the new protocol version. At the time of writing, no broker is MQTT 5 ready yet.

HiveMQ was the first broker to fully support version 3.1.1 of MQTT, and of course here at HiveMQ we are committed to giving our customers the advantage of the new features of version 5 of the protocol as soon as possible and viable.

We are going to provide an Early Access version of the upcoming HiveMQ generation with MQTT 5 support by May/June 2018. If you’re a library developer or want to go live with the new protocol version as soon as possible: The Early Access version is for you. Add yourself to the Early Access Notification List and we’ll notify you when the Early Access version is available.

We expect to release the upcoming HiveMQ generation in the third quarter of 2018 with full support of ALL MQTT 5 features at scale in an interoperable way with previous MQTT versions.

When is MQTT 5 ready for prime time?

MQTT is typically used in mission critical environments where it’s not acceptable that parts of the infrastructure, broker or client, are unreliable or have some rough edges. So it’s typically not advisable to be the very first to try out new things in a critical production environment.

Here at HiveMQ we expect that the first users will go live in production in late Q3 (September) 2018 and in the subsequent months. After the releases of the Paho libraries in June and the HiveMQ Early Access version, the adoption of MQTT 5 is expected to increase rapidly.

So, is MQTT 5 ready for prime time yet (as of December 2017)? No.

Will the new version of the protocol be suitable for production environments in the second half of 2018? Yes, definitely.

Upcoming topics in this series

We will continue this blog post series in January after the European Christmas holidays. To kick off the technical part of the series, we will take a look at the foundational changes of the MQTT protocol. After that, we will release one blog post per week that thoroughly reviews and inspects one new feature in detail, together with best practices and fun trivia.

If you want us to send the next and all upcoming articles directly into your inbox, just use the newsletter subscribe form below.

P.S. Don’t forget to let us know if MQTT 5 is of interest for you by participating in this quick poll.

Have an awesome week,
The HiveMQ Team

MQTT 5: Introduction to MQTT 5

Post Syndicated from The HiveMQ Team original https://www.hivemq.com/blog/mqtt-5-introduction-to-mqtt-5/

MQTT 5 Introduction

Introduction to MQTT 5

Welcome to our brand new blog post series MQTT 5 – Features and Hidden Gems. Without doubt, the MQTT protocol is the most popular and best received Internet of Things protocol as of today (see the Google Trends Chart below), supporting large scale use cases ranging from Connected Cars, Manufacturing Systems, Logistics, Military Use Cases to Enterprise Chat Applications, Mobile Apps and connecting constrained IoT devices. Of course, with huge amounts of production deployments, the wish list for future versions of the MQTT protocol grew bigger and bigger.

MQTT 5 is by far the most extensive and most feature-rich update to the MQTT protocol specification ever. We are going to explore all hidden gems and protocol features with use case discussion and useful background information – one blog post at a time.

Be sure to read the MQTT Essentials Blog Post series first before diving into our new MQTT 5 series. To get the most out of the new blog posts, it’s important to have a basic understanding of the MQTT 3.1.1 protocol as we are going to highlight key changes as well as all improvements.

Glenn’s Take on re:Invent 2017 Part 1

Post Syndicated from Glenn Gore original https://aws.amazon.com/blogs/architecture/glenns-take-on-reinvent-2017-part-1/

GREETINGS FROM LAS VEGAS

Glenn Gore here, Chief Architect for AWS. I’m in Las Vegas this week — with 43K others — for re:Invent 2017. We have a lot of exciting announcements this week. I’m going to post to the AWS Architecture blog each day with my take on what’s interesting about some of the announcements from a cloud architectural perspective.

Why not start at the beginning? At the Midnight Madness launch on Sunday night, we announced Amazon Sumerian, our platform for VR, AR, and mixed reality. The hype around VR/AR has existed for many years, though for me, it is a perfect example of how a working end-to-end solution often requires innovation from multiple sources. For AR/VR to be successful, we need many components to come together in a coherent manner to provide a great experience.

First, we need lightweight, high-definition goggles with motion tracking that are comfortable to wear. Second, we need to track movement of our body and hands in a 3-D space so that we can interact with virtual objects in the virtual world. Third, we need to build the virtual world itself and populate it with assets and define how the interactions will work and connect with various other systems.

There has been rapid development of the physical devices for AR/VR, ranging from iOS devices to Oculus Rift and HTC Vive, which provide excellent capabilities for the first and second components defined above. With the launch of Amazon Sumerian we are solving for the third area, which will help developers easily build their own virtual worlds and start experimenting and innovating with how to apply AR/VR in new ways.

Already, within 48 hours of Amazon Sumerian being announced, I have had multiple discussions with customers and partners around some cool use cases where VR can help in training simulations, remote-operator controls, or with new ideas around interacting with complex visual data sets, which starts bringing concepts straight out of sci-fi movies into the real (virtual) world. I am really excited to see how Sumerian will unlock the creative potential of developers and where this will lead.

Amazon MQ
I am a huge fan of distributed architectures where asynchronous messaging is the backbone of connecting the discrete components together. Amazon Simple Queue Service (Amazon SQS) is one of my favorite services due to its simplicity, scalability, performance, and the incredible flexibility of how you can use Amazon SQS in so many different ways to solve complex queuing scenarios.

While Amazon SQS is easy to use when building cloud-native applications on AWS, many of our customers running existing applications on-premises required support for different messaging protocols such as Java Message Service (JMS), .Net Messaging Service (NMS), Advanced Message Queuing Protocol (AMQP), MQ Telemetry Transport (MQTT), Simple (or Streaming) Text Orientated Messaging Protocol (STOMP), and WebSockets. One of the most popular on-premises message brokers is Apache ActiveMQ. With the release of Amazon MQ, you can now run Apache ActiveMQ on AWS as a managed service, similar to what we did with Amazon ElastiCache back in 2012. For me, there are two compelling, major benefits that Amazon MQ provides:

  • Integrate existing applications with cloud-native applications without having to change a line of application code if using one of the supported messaging protocols. This removes one of the biggest blockers for integration between the old and the new.
  • Remove the complexity of configuring Multi-AZ resilient message broker services as Amazon MQ provides out-of-the-box redundancy by always storing messages redundantly across Availability Zones. Protection is provided against failure of a broker through to complete failure of an Availability Zone.

I believe that Amazon MQ is a major component in the tools required to help you migrate your existing applications to AWS. Having set up cross-data center Apache ActiveMQ clusters myself in the past, and then tested them to ensure they work as expected during critical failure scenarios, I know that technical staff working on migrations to AWS will benefit from the ease of deploying a fully redundant, managed Apache ActiveMQ cluster within minutes.

Who would have thought I would have been so excited to revisit Apache ActiveMQ in 2017 after using SQS for many, many years? Choice is a wonderful thing.

Amazon GuardDuty
Maintaining application and information security in the modern world is increasingly complex and is constantly evolving and changing as new threats emerge. This is due to the scale, variety, and distribution of services required in a competitive online world.

At Amazon, security is our number one priority. Thus, we are always looking at how we can increase security detection and protection while simplifying the implementation of advanced security practices for our customers. As a result, we released Amazon GuardDuty, which provides intelligent threat detection by using a combination of multiple information sources, transactional telemetry, and the application of machine learning models developed by AWS. One of the biggest benefits of Amazon GuardDuty that I appreciate is that enabling this service requires zero software, agents, sensors, or network choke points, which can all impact performance or reliability of the service you are trying to protect. Amazon GuardDuty works by monitoring your VPC flow logs, AWS CloudTrail events, and DNS logs, as well as combining other sources of security threats that AWS is aggregating from our own internal and external sources.

The use of machine learning in Amazon GuardDuty allows it to identify changes in behavior, which could be suspicious and require additional investigation. Amazon GuardDuty works across all of your AWS accounts allowing for an aggregated analysis and ensuring centralized management of detected threats across accounts. This is important for our larger customers who can be running many hundreds of AWS accounts across their organization, as providing a single common threat detection of their organizational use of AWS is critical to ensuring they are protecting themselves.

Detection, though, is only the beginning of what Amazon GuardDuty enables. When a threat is identified in Amazon GuardDuty, you can configure remediation scripts or trigger Lambda functions where you have custom responses that enable you to start building automated responses to a variety of different common threats. Speed of response is required when a security incident may be taking place. For example, Amazon GuardDuty detects that an Amazon Elastic Compute Cloud (Amazon EC2) instance might be compromised due to traffic from a known set of malicious IP addresses. Upon detection of a compromised EC2 instance, we could apply an access control entry restricting outbound traffic for that instance, which stops loss of data until a security engineer can assess what has occurred.
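To make the idea of an automated response a little more concrete, here is a minimal sketch of a Lambda handler written with the AWS SDK for Java that adds a deny entry to a network ACL when invoked. The ACL ID, rule number, CIDR block, and the simplified event shape are all assumptions for illustration; a real function would parse the actual GuardDuty finding delivered to it and target resources in your own environment.

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.CreateNetworkAclEntryRequest;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// Hypothetical remediation: block outbound traffic to a suspicious CIDR
// reported by a GuardDuty finding (event structure simplified for the sketch).
public class GuardDutyRemediationHandler implements RequestHandler<Map<String, Object>, String> {

    private final AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

    @Override
    public String handleRequest(Map<String, Object> event, Context context) {
        // In a real function you would extract these values from the finding's detail section;
        // placeholders are used here to keep the example short.
        String suspiciousCidr = "198.51.100.0/24";          // documentation/example IP range
        String networkAclId = "acl-0123456789abcdef0";      // placeholder network ACL ID

        ec2.createNetworkAclEntry(new CreateNetworkAclEntryRequest()
                .withNetworkAclId(networkAclId)
                .withRuleNumber(90)        // must not collide with existing rule numbers
                .withProtocol("-1")        // all protocols
                .withEgress(true)          // outbound traffic
                .withRuleAction("deny")
                .withCidrBlock(suspiciousCidr));

        return "Blocked outbound traffic to " + suspiciousCidr;
    }
}
```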

Whether you are a customer running a single service in a single account, or a global customer with hundreds of accounts and thousands of applications, or a startup with hundreds of microservices and an hourly release cycle in a DevOps world, I recommend enabling Amazon GuardDuty. We have a 30-day free trial available for all new customers of this service. As it only monitors events, no change to your architecture within AWS is required.

Stay tuned for tomorrow’s post on AWS Media Services and Amazon Neptune.

 

Glenn during the Tour du Mont Blanc

Presenting AWS IoT Analytics: Delivering IoT Analytics at Scale and Faster than Ever Before

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/launch-presenting-aws-iot-analytics/

One of the technology areas I thoroughly enjoy is the Internet of Things (IoT). Even as a child I used to infuriate my parents by taking apart the toys they would purchase for me to see how they worked and if I could somehow put them back together. It seems I was somehow destined to end up in the tough and ever-changing world of technology. Therefore, it’s no wonder that I am really enjoying learning and tinkering with IoT devices and technologies. It combines my love of development and software engineering with my curiosity around circuits, controllers, and other facets of the electrical engineering discipline, even though I cannot claim to be an electrical engineer.

Despite all of the information that is collected by the deployment of IoT devices and solutions, I honestly never really thought about the need to analyze, search, and process this data until I came up against a scenario where it became of the utmost importance to be able to search and query through loads of sensor data for an anomaly. Of course, I understood the importance of analytics for businesses to make accurate decisions and predictions to drive the organization’s direction. But it didn’t occur to me initially how important it was to make analytics an integral part of my IoT solutions. Well, I learned my lesson just in time, because at this re:Invent a service is launching to make it easier for anyone to process and analyze IoT messages and device data.

 

Hello, AWS IoT Analytics! AWS IoT Analytics is a fully managed service of AWS IoT that provides advanced analysis of the data collected from your IoT devices. With the AWS IoT Analytics service, you can process messages, gather and store large amounts of device data, and query your data. The new AWS IoT Analytics service also integrates with Amazon QuickSight for visualization of your data and brings the power of machine learning through integration with Jupyter Notebooks.

Benefits of AWS IoT Analytics

  • Helps with predictive analysis of data by providing access to pre-built analytical functions
  • Provides the ability to visualize analytical output from the service
  • Provides tools to clean up data
  • Can help identify patterns in the gathered data

Be In the Know: IoT Analytics Concepts

  • Channel: archives the raw, unprocessed messages and collects data from MQTT topics.
  • Pipeline: consumes messages from channels and allows message processing.
    • Activities: perform transformations on your messages, including filtering attributes and invoking Lambda functions for advanced processing.
  • Data Store: Used as a queryable repository for processed messages. Provides the ability to have multiple data stores for messages coming from different devices or locations, or filtered by message attributes.
  • Data Set: A data retrieval view on a data store; can be generated on a recurring schedule.

Getting Started with AWS IoT Analytics

First, I’ll create a channel to receive incoming messages.  This channel can be used to ingest data sent to the channel via MQTT or messages directed from the Rules Engine. To create a channel, I’ll select the Channels menu option and then click the Create a channel button.

I’ll name my channel, TaraIoTAnalyticsID, and give the Channel an MQTT topic filter of Temperature. To complete the creation of my channel, I will click the Create Channel button.

Now that I have my Channel created, I need to create a Data Store to receive and store the messages received on the Channel from my IoT device. Remember you can set up multiple Data Stores for more complex solution needs, but I’ll just create one Data Store for my example. I’ll select Data Stores from menu panel and click Create a data store.

 

I’ll name my Data Store TaraDataStoreID, and once I click the Create the data store button, I will have successfully set up a Data Store to house messages coming from my Channel.

Now that I have my Channel and my Data Store, I will need to connect the two using a Pipeline. I’ll create a simple pipeline that just connects my Channel and Data Store, but you can create a more robust pipeline to process and filter messages by adding Pipeline activities like a Lambda activity.

To create a pipeline, I’ll select the Pipelines menu option and then click the Create a pipeline button.

I will not add an Attribute for this pipeline, so I will click the Next button.

As we discussed, there are additional pipeline activities that I can add to my pipeline for the processing and transformation of messages, but I will keep my first pipeline simple and hit the Next button.

The final step in creating my pipeline is for me to select my previously created Data Store and click Create Pipeline.

All that is left for me to do to take advantage of the AWS IoT Analytics service is to create an IoT rule that sends data to an AWS IoT Analytics channel. Wow, that was a super easy process for setting up analytics for IoT devices.
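To give a feel for the device side of this flow, here is a minimal sketch (again using the Paho Java client) of publishing a JSON temperature reading to the Temperature topic that my channel filters on. The endpoint, client ID, and payload shape are assumptions for illustration; a real device talking to AWS IoT Core would additionally configure TLS with its device certificate (for example via MqttConnectOptions.setSocketFactory) or use the AWS IoT Device SDK instead.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class TemperaturePublisher {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint – a real AWS IoT endpoint looks like
        // ssl://<prefix>-ats.iot.<region>.amazonaws.com:8883 and requires TLS client certificates.
        MqttClient client = new MqttClient("tcp://localhost:1883", "temperature-sensor-1");
        client.connect(new MqttConnectOptions());

        // A simple JSON reading; the IoT rule forwards messages on this topic
        // into the IoT Analytics channel created above.
        String payload = "{\"deviceId\":\"sensor-1\",\"temperature\":21.7,\"unit\":\"C\"}";
        MqttMessage message = new MqttMessage(payload.getBytes());
        message.setQos(1);
        client.publish("Temperature", message);

        client.disconnect();
    }
}
```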

If I wanted to create a Data Set as a result of queries run against my data for visualization with Amazon QuickSight, or integrate with Jupyter Notebooks to perform more advanced analytical functions, I can choose the Analyze menu option to bring up the screens to create data sets and access the Jupyter Notebook instances.

Summary

As you can see, it was a very simple process to set up advanced data analysis for AWS IoT. With AWS IoT Analytics, you have the ability to collect, visualize, process, query, and store large amounts of data generated from your AWS IoT connected devices. Additionally, you can access the AWS IoT Analytics service in a myriad of different ways: the AWS Command Line Interface (AWS CLI), the AWS IoT API, language-specific AWS SDKs, and AWS IoT Device SDKs.

AWS IoT Analytics is available today for you to dig into the analysis of your IoT data. To learn more about AWS IoT and AWS IoT Analytics go to the AWS IoT Analytics product page and/or the AWS IoT documentation.

Tara

Amazon MQ – Managed Message Broker Service for ActiveMQ

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-mq-managed-message-broker-service-for-activemq/

Messaging holds the parts of a distributed application together, while also adding resiliency and enabling the implementation of highly scalable architectures. For example, earlier this year, Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS) supported the processing of customer orders on Prime Day, collectively processing 40 billion messages at a rate of 10 million per second, with no customer-visible issues.

SQS and SNS have been used extensively for applications that were born in the cloud. However, many of our larger customers are already making use of open-sourced or commercially-licensed message brokers. Their applications are mission-critical, and so is the messaging that powers them. Our customers describe the setup and on-going maintenance of their messaging infrastructure as “painful” and report that they spend at least 10 staff-hours per week on this chore.

New Amazon MQ
Today we are launching Amazon MQ – a managed message broker service for Apache ActiveMQ that lets you get started in minutes with just three clicks! As you may know, ActiveMQ is a popular open-source message broker that is fast & feature-rich. It offers queues and topics, durable and non-durable subscriptions, push-based and poll-based messaging, and filtering.

As a managed service, Amazon MQ takes care of the administration and maintenance of ActiveMQ. This includes responsibility for broker provisioning, patching, failure detection & recovery for high availability, and message durability. With Amazon MQ, you get direct access to the ActiveMQ console and industry standard APIs and protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. This allows you to move from any message broker that uses these standards to Amazon MQ–along with the supported applications–without rewriting code.
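To illustrate what “without rewriting code” means in practice, here is a minimal JMS sketch using the standard Apache ActiveMQ client library. When moving an application like this to Amazon MQ, essentially only the broker URL and credentials change; the endpoint, user, password, and queue name below are placeholders for the example.

```java
import org.apache.activemq.ActiveMQConnectionFactory;

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class AmazonMqJmsExample {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint – an Amazon MQ broker exposes an OpenWire/SSL endpoint
        // of the form ssl://b-<broker-id>.mq.<region>.amazonaws.com:61617
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("ssl://example-broker-endpoint:61617");

        Connection connection = factory.createConnection("brokerUser", "brokerPassword");
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("ORDERS.INCOMING");

        // Send a single text message to the queue
        MessageProducer producer = session.createProducer(queue);
        TextMessage message = session.createTextMessage("order-12345");
        producer.send(message);

        session.close();
        connection.close();
    }
}
```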

You can create a single-instance Amazon MQ broker for development and testing, or an active/standby pair that spans AZs, with quick, automatic failover. Either way, you get data replication across AZs and a pay-as-you-go model for the broker instance and message storage.

Amazon MQ is a full-fledged part of the AWS family, including the use of AWS Identity and Access Management (IAM) for authentication and authorization to use the service API. You can use Amazon CloudWatch metrics to keep a watchful eye on metrics such as queue depth and initiate Auto Scaling of your consumer fleet as needed.

Launching an Amazon MQ Broker
To get started, I open up the Amazon MQ Console, select the desired AWS Region, enter a name for my broker, and click on Next step:

Then I choose the instance type, indicate that I want to create a standby, and click on Create broker (I can select a VPC and fine-tune other settings in the Advanced settings section):

My broker will be created and ready to use in 5-10 minutes:

The URLs and endpoints that I use to access my broker are all available at a click:

I can access the ActiveMQ Web Console at the link provided:

The broker publishes instance, topic, and queue metrics to CloudWatch. Here are the instance metrics:

Available Now
Amazon MQ is available now and you can start using it today in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), and Asia Pacific (Sydney) Regions.

The AWS Free Tier lets you use a single-AZ micro instance for up to 750 hours and store up to 1 gigabyte each month, for one year. After that, billing is based on instance-hours and message storage, plus charges for Internet data transfer if the broker is accessed from outside of AWS.

Jeff;

Let’s Nudge the Future

Post Syndicated from Йовко Ламбрев original https://yovko.net/push-the-future/

Let’s Nudge the Future

Imagine we are a few years into the future. But no more than the fingers of one hand. You want to manufacture a product. You walk into the factory with your design documentation on some digital medium… Or not quite: you don’t even have a fully precise design yet, but with the help of the engineering team it is soon ready, and along the way you have clarified all the questions about exactly which materials to use, which of their specific properties to exploit, and which potential problems and weak points to avoid, and how to do everything at the most optimal cost and speed of production. Most of these decisions are suggested by the factory’s own information system, because it has accumulated data and statistics, and the machine learning algorithms keep getting better. Regulatory requirements are taken into account, if the product is subject to them. The material suppliers and their prices are settled, and you even have a forecast of how a change in the raw materials markets would affect your price. You know which substitutes are most suitable. You know in detail how your product will be recycled once its service life is over. You have the most detailed bills of quantities, and with every change to the product they are refreshed automatically. Finally, you put on virtual reality glasses and inspect your product “live” before it has even gone into production.

If everything is as it should be, you release the production order and the factory reconfigures itself for the new product, something that will happen ever more automatically and quickly with each passing year. Transport robots will fetch the required materials from the warehouse, their collaborative “colleagues” (cobots) on the production lines will tend the machines and the manufacturing of the new product (replacing worn-out tools, loading blanks, packaging, and so on), and at the end, in the finished-goods warehouse, the completed items will arrive sorted, labelled or marked with RFID, NFC or QR codes, ready for shipment. The tags of that marking will probably live in a blockchain database, for traceability and proof of original manufacture and origin. People will still be needed, but most of them will be in the engineering department or in what we might call the factory’s control centre; fewer and fewer of them will be out among the machines. And for more and more products, such manufacturing will increasingly be able to run around the clock.

Throughout the whole process you will be able to track quantities, pace, and the consumption of materials and energy, not in some awful tables of jittering numbers, but through convenient, human interfaces and visualizations, including remotely via a smartphone or tablet, because there is actually not much for you to do at the factory… The systems that analyse the sensor data and monitor the machines’ parameters will “predict” and warn about possible overloading of machines and tools, potential problems, and scrap…

“I dream of my factory being an extension of my nervous system, of being able to feel it and make decisions on that basis,” an industrialist told me recently.

And let’s stop imagining here, because… this is not science fiction at all. The technologies needed to make it possible are already around us. In industry, however, there are many stumbling blocks, put in place mostly by various vendors trying to protect market segments for themselves. Many machines share no data at all. Standards that would help integrate components from different manufacturers are often missing (or simply not used). Vendors of information systems try to sell what they have already built and take too little interest in the real needs and problems of the manufacturing sector. After decades of automation and digitalization (yes, formally the third industrial revolution began in the 1970s), information technology remains focused mostly on enablement and support, and far too little on real, practical, fruitful innovation specific to the sector. In the IT world, the arrogant but groundless expectation persists that industry should adapt itself to the technology, rather than the technology being made fit for industry.

But in the end innovation does happen, and like an avalanche at that. Some innovations matter more than others, or wait for their moment to shine. And none of this is an end in itself or out of context.

Such a modern, digital factory is entirely possible. From today’s point of view, it is easier to imagine it as a smooth evolution of an existing traditional factory than to plan and build one from scratch. And it does not require too much time, too many people, or some enormous effort. Available platforms and technologies need to be surveyed and evaluated, together with the possible integrations between them. The ones with the best potential need to be sifted out and tested in a real environment. Applications need to be selected or adapted. And things will start to come together, not overnight, but all of this is entirely realistic, and if we start now, real results could become visible between 2020 and 2025. And that in Bulgaria, in Plovdiv, in Tarnovo or Gabrovo. Or anywhere else, but a few days ago one of the visionaries in the industry around Plovdiv and I were discussing that if we manage to do it here, it can be done anywhere. And it can be multiplied as many times as needed.

What is needed is a handful of people with the capacity and willingness to focus on the topic: ideally people with somewhat broader experience, who are not afraid to think outside the bounds of their current narrow specialty. People with the mindset to jump quickly into a new territory, piece of software, or platform. People able to see beneath the surface of technical documentation in order to judge a system’s potential. The integrator’s approach is important: the view of the overall picture and the end goal of everything working together. The potential of a single machine or system outside the context of the overall integration, if that integration is too hard, too expensive, or impossible, matters very little. Any IT experience will help, but ideally the combination of IT and engineering (electronics and/or automation) is the brilliant alloy. Beyond that, a check mark next to any one (or more) of the following will be a plus:

  • analytical and critical thinking (creative and out-of-the-box)
  • one of the standard programming languages such as C or Java
  • some database experience (at least SQL), as well as HTML and REST
  • experience programming controllers
  • IoT or IIoT (Industrial IoT), MQTT (or other M2M protocols)
  • comfort with different operating systems (at least Linux and Windows)
  • PLM (product lifecycle management), but not PLCM (product life-cycle management in the marketing sense)
  • machine learning
  • any other experience in or from industry
  • the ability to express yourself and write concisely and with focus
  • willingness to share knowledge and work in a team
  • a pragmatic, practically oriented approach to problems
  • use of tools for managing tasks, projects and personal commitments (calendar, trello, todoist, etc.)

English is needed at least at a working level because, although not constantly, the work will take place in a multilingual environment, and English is the lowest common denominator for communication and documentation.

And if the above sounds pretentious to anyone’s ears, let me hurry to clarify that we are not going to stage any revolutions or grand innovations; we will simply get some useful work done. In the ideal case, we will just give evolution a gentle nudge forward. 🙂

Things like this are already being done to one degree or another. Vendors are starting to appear who will claim they can sell you the most suitable end-to-end solution. The truth is that many of them are simply trying to pull as many customers as possible into their own ecosystem. There are no universal solutions: every industrial production has many specifics that must be kept in mind. Solutions are best tailor-made and integrated, selecting the most optimal components from different vendors.

That is why I am waving a little flag: I am looking for colleagues and partners. Representatives of the industry around Plovdiv have already extended a hand and are ready to work together (right now it is easiest, both for me and for them, to think within Plovdiv). Their main worry is that the necessary people will not be found. So, to begin with, I want to hold a round of conversations with those who would be interested in working together on the topic, and after that we will discuss next steps. I have a few people in mind whom I will talk to personally, but most of them are busy with other things. So, in writing this text, I hope we will also find new colleagues.

I personally believe that success comes from partnership and collaboration, not from jealously guarding the twenty lines of code you wrote, which were, after all, an improvisation on top of someone else’s work done before you. Knowledge locked between your own two ears simply goes stale quickly and is of no use if you haven’t shared it with others. The era of parcelled-out knowledge and exclusive skills ended not long ago. Now we can achieve something meaningful only through mutual effort and teamwork.

If you are interested, please get in touch and share whatever you consider important about your previous experience or attitude toward the topic. I will try to meet the most interesting people in person as soon as possible.
