Tag Archives: Security Software

APT-Hunter – Threat Hunting Tool via Windows Event Log

Post Syndicated from Darknet original https://www.darknet.org.uk/2021/03/apt-hunter-threat-hunting-tool-via-windows-event-log/

APT-Hunter – Threat Hunting Tool via Windows Event Log

APT-Hunter is a threat hunting tool for Windows event logs, built with a purple-team mindset to detect APT movements hidden in the sea of Windows event logs.

It helps decrease the time needed to uncover suspicious activity by making good use of the Windows event logs collected, ensuring that no critical events configured for detection are missed.

The target audience for APT-Hunter is threat hunters, incident response professionals or forensic investigators.

Read the rest of APT-Hunter – Threat Hunting Tool via Windows Event Log now! Only available at Darknet.

OWASP APICheck – HTTP API DevSecOps Toolset

Post Syndicated from Darknet original https://www.darknet.org.uk/2020/10/owasp-apicheck-http-api-devsecops-toolset/

OWASP APICheck – HTTP API DevSecOps Toolset

APICheck is an HTTP API DevSecOps toolset. It integrates existing HTTP API tools, lets you create execution chains easily, and is designed with third-party integration in mind.

APICheck is composed of a set of tools that can be connected to one another to achieve different functionality, depending on how they are connected. It allows you to create execution chains, and it can integrate not only self-developed tools but also existing ones, leveraging them to provide new functionality.

Read the rest of OWASP APICheck – HTTP API DevSecOps Toolset now! Only available at Darknet.

Pingcastle – Active Directory Security Assessment Tool

Post Syndicated from Darknet original https://www.darknet.org.uk/2020/05/pingcastle-active-directory-security-assessment-tool/

Pingcastle – Active Directory Security Assessment Tool

PingCastle is an Active Directory security assessment tool designed to quickly gauge the security level of an Active Directory environment, using a methodology based on a risk assessment and maturity framework. It does not aim at a perfect evaluation but rather at an efficient compromise.

The risk level regarding Active Directory security has changed. Several vulnerabilities have been made popular by tools like mimikatz and sites like adsecurity.org.

CMMI is a well-known methodology from Carnegie Mellon University that evaluates maturity on a scale from 1 to 5; PingCastle has adapted CMMI to Active Directory security.

Read the rest of Pingcastle – Active Directory Security Assessment Tool now! Only available at Darknet.

Ransomware Update: Viruses Targeting Business IT Servers

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/ransomware-update-viruses-targeting-business-it-servers/


As ransomware attacks have grown in number in recent months, the tactics and attack vectors also have evolved. While the primary method of attack used to be to target individual computer users within organizations with phishing emails and infected attachments, we’re increasingly seeing attacks that target weaknesses in businesses’ IT infrastructure.

How Ransomware Attacks Typically Work

In our previous posts on ransomware, we described the common vehicles hackers use to infect organizations with ransomware. Most often, trojan downloaders are distributed through malicious downloads and spam emails. The emails contain a variety of file attachments which, if opened, download and run one of the many ransomware variants. Once a user’s computer is infected with a malicious downloader, it retrieves additional malware, which frequently includes crypto-ransomware. After the files have been encrypted, a ransom payment is demanded of the victim in exchange for decrypting the files.

What’s Changed With the Latest Ransomware Attacks?

In 2016, a customized ransomware strain called SamSam began attacking servers, primarily in health care institutions. SamSam, unlike more conventional ransomware, is not delivered through downloads or phishing emails. Instead, the attackers behind SamSam use tools to identify unpatched servers running Red Hat’s JBoss enterprise products. Once the attackers have successfully gained entry into one of these servers by exploiting vulnerabilities in JBoss, they use other freely available tools and scripts to collect credentials and gather information on networked computers. Then they deploy their ransomware to encrypt files on these systems before demanding a ransom. Gaining entry to an organization through its IT center rather than its endpoints makes this approach scalable and especially unsettling.

SamSam’s methodology is to scour the internet for accessible and vulnerable JBoss application servers, especially ones used by hospitals. It’s not unlike a burglar rattling doorknobs in a neighborhood to find unlocked homes. When SamSam finds an unlocked home (an unpatched server), the software infiltrates the system. It is then free to spread across the company’s network by stealing passwords. As it traverses the network and systems, it encrypts files, preventing access until the victims pay the hackers a ransom, typically between $10,000 and $15,000. The low ransom amount has encouraged some victimized organizations to pay rather than incur the downtime required to wipe and reinitialize their IT systems.

The success of SamSam is due to its effectiveness rather than its sophistication. SamSam can enter and traverse a network without human intervention. Some organizations are learning too late that securing internet-facing services in their data center from attack is just as important as securing endpoints.

The typical steps in a SamSam ransomware attack are:

  1. Attackers gain access to a vulnerable server. They exploit vulnerable software or weak/stolen credentials.
  2. The attack spreads via remote access tools. Attackers harvest credentials, create SOCKS proxies to tunnel traffic, and abuse RDP to install SamSam on more computers in the network.
  3. The ransomware payload is deployed. Attackers run batch scripts to execute the ransomware on compromised machines.
  4. A ransom demand is delivered, requiring payment to decrypt files. Demand amounts vary from victim to victim; relatively low ransom amounts appear to be designed to encourage quick payment decisions.

What all the organizations successfully exploited by SamSam have in common is that they were running unpatched servers. Some had their endpoints and servers backed up, while others did not. Some of those without backups they could use to recover their systems chose to pay the ransom.

Timeline of SamSam History and Exploits

Since its appearance in 2016, SamSam has been in the news with many successful incursions into healthcare, business, and government institutions.

March 2016
SamSam appears

SamSam campaign targets vulnerable JBoss servers
Attackers home in on healthcare organizations specifically, as they’re more likely to have unpatched JBoss machines.

April 2016
SamSam finds new targets

SamSam begins targeting schools and government.
After initial success targeting healthcare, attackers branch out to other sectors.

April 2017
New tactics include RDP

Attackers shift to targeting organizations with exposed RDP connections, and maintain focus on healthcare.
An attack on Erie County Medical Center costs the hospital $10 million over three months of recovery.

January 2018
Municipalities attacked

• Attack on Municipality of Farmington, NM.
• Attack on Hancock Health.
• Attack on Adams Memorial Hospital
• Attack on Allscripts (electronic health records), whose platform serves 180,000 physicians and 2,500 hospitals and holds 7.2 million patients’ health records.

February 2018
Attack volume increases

• Attack on Davidson County, NC.
• Attack on Colorado Department of Transportation.

March 2018
SamSam shuts down Atlanta

• Second attack on Colorado Department of Transportation.
• City of Atlanta suffers a devastating attack by SamSam.
The attack has far-reaching impacts — crippling the court system, keeping residents from paying their water bills, limiting vital communications like sewer infrastructure requests, and pushing the Atlanta Police Department to file paper reports.
• SamSam campaign nets $325,000 in 4 weeks.
Infections spike as attackers launch new campaigns. Healthcare and government organizations are once again the primary targets.

How to Defend Against SamSam and Other Ransomware Attacks

The best way to respond to a ransomware attack is to avoid having one in the first place. Beyond that, making sure your valuable data is backed up and unreachable to ransomware will ensure that your downtime and data loss are minimal or zero if you ever suffer an attack.

In our previous post, How to Recover From Ransomware, we listed ten ways to protect your organization from ransomware:

  1. Use anti-virus and anti-malware software or other security policies to block known payloads from launching.
  2. Make frequent, comprehensive backups of all important files and isolate them from local and open networks. In a recent survey, 74% of cybersecurity professionals viewed data backup and recovery as by far the most effective response to a successful ransomware attack.
  3. Keep offline backups of data stored in locations inaccessible from any potentially infected computer, such as disconnected external storage drives or the cloud, so that the ransomware cannot reach them (a minimal sketch of this follows the list).
  4. Install the latest security updates issued by software vendors of your OS and applications. Remember to patch early and patch often to close known vulnerabilities in operating systems, server software, browsers, and web plugins.
  5. Consider deploying security software to protect endpoints, email servers, and network systems from infection.
  6. Exercise cyber hygiene, such as using caution when opening email attachments and links.
  7. Segment your networks to keep critical computers isolated and to prevent the spread of malware in case of attack. Turn off unneeded network shares.
  8. Turn off admin rights for users who don’t require them. Give users the lowest system permissions they need to do their work.
  9. Restrict write permissions on file servers as much as possible.
  10. Educate yourself, your employees, and your family in best practices to keep malware out of your systems. Update everyone on the latest email phishing scams and social engineering aimed at turning victims into abettors.
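To make items 2 and 3 concrete, here is a minimal sketch of a backup pass with integrity checking, written as a Node.js script. Everything in it is an assumption for illustration: the source and destination paths are hypothetical, and the destination is meant to be storage you disconnect once the copy completes.

```javascript
// backup-manifest.js: a minimal sketch, not a complete backup tool.
// Assumptions: SRC and DEST are placeholder paths; DEST should live on storage
// that is disconnected (or otherwise isolated) after the copy completes.
var fs = require('fs');
var path = require('path');
var crypto = require('crypto');

var SRC = '/data/important';                    // hypothetical source folder
var DEST = '/mnt/offline-backup/' + Date.now(); // timestamped backup set

function sha256(file) {
  return crypto.createHash('sha256').update(fs.readFileSync(file)).digest('hex');
}

fs.mkdirSync(DEST, { recursive: true });
var manifest = {};

fs.readdirSync(SRC).forEach(function (name) {
  var src = path.join(SRC, name);
  if (!fs.statSync(src).isFile()) return;       // flat copy, for brevity
  var dest = path.join(DEST, name);
  fs.copyFileSync(src, dest);
  manifest[name] = sha256(dest);                // hash the copy, not the original
});

fs.writeFileSync(path.join(DEST, 'MANIFEST.json'), JSON.stringify(manifest, null, 2));
console.log('Backed up', Object.keys(manifest).length, 'files to', DEST);
```

The manifest lets you verify a restore against the recorded hashes before trusting it.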

Please Tell Us About Your Experiences with Ransomware

Have you endured a ransomware attack or have a strategy to avoid becoming a victim? Please tell us of your experiences in the comments.

The post Ransomware Update: Viruses Targeting Business IT Servers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

You don’t need printer security

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/02/you-dont-need-printer-security.html

So there’s this tweet:

What it’s probably referring to is this:

This is an obviously bad idea.

Well, not so “obvious”, so some people have asked me to clarify the situation. After all, without “security”, couldn’t a printer just be added to a botnet of IoT devices?

The answer is this:

Fixing insecurity is almost always better than adding a layer of security.

Adding security is notoriously problematic, for three reasons:

  1. Hackers are active attackers. When presented with a barrier in front of an insecurity, they’ll often find ways around that barrier. It’s a common problem with “web application firewalls”, for example.
  2. The security software itself can become a source of vulnerabilities hackers can attack, which has happened frequently in anti-virus and intrusion prevention systems.
  3. Security features are usually snake-oil: they sound great on paper, but no details and no independent evaluation are provided to the public.

It’s the last one that’s most important. HP markets features, but there’s no guarantee they work. In particular, similar features in other products have proven not to work in the past.

HP describes its three special features in a brief whitepaper [*]. They aren’t bad, but at the same time, they aren’t particularly good. Windows already offers all these features. Indeed, as far as I know, they are simply using Windows as their firmware operating system and slapping an “HP” marketing name onto existing Windows functionality.

HP Sure Start: This refers to the now-standard feature of a secure boot process, found in almost all devices these days. Windows supports this with UEFI Secure Boot. Apple’s iPhones work this way, which is why the FBI needed Apple’s help to break into a captured terrorist’s phone. It’s a feature built into most IoT hardware, though most vendors don’t enable it in software.

Whitelisting: Their description sounds like “signed firmware updates”, but if that were the case, they’d call it that. Traditionally, “whitelisting” referred to a different feature: a list of hashes of the programs allowed to run on the device. Either way, it’s pretty common functionality.
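For reference, the traditional hash-whitelisting idea is simple enough to sketch in a few lines of Node.js. This is a toy illustration only, not HP’s implementation; the file name and the allowlist entries are made up:

```javascript
// allowlist.js: toy illustration of hash-based whitelisting (not HP's implementation).
var fs = require('fs');
var crypto = require('crypto');

// Hypothetical allowlist: SHA-256 hashes of binaries permitted to run.
var ALLOWED = new Set([
  '2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824' // example entry
]);

function isAllowed(file) {
  var hash = crypto.createHash('sha256').update(fs.readFileSync(file)).digest('hex');
  return ALLOWED.has(hash);
}

var target = process.argv[2] || './firmware.bin'; // hypothetical path
console.log(target, isAllowed(target) ? '-> run' : '-> block');
```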

Run-time intrusion detection: They have numerous, conflicting descriptions of this on their website. It may mean scanning memory for signatures of known viruses. It may mean stack cookies. It may mean double-checking kernel modules. Windows does all of these things, and they provide only a tiny benefit in stopping security threats.

As for traditional attacks against printers, none of these features really matters. What you need to secure a printer is the ability to disable services you aren’t using (close ports), enable passwords and other access controls, and delete files from old print jobs so hackers can’t grab them from the printer. HP has features to address these security problems, but then, so do its competitors.
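A quick way to see which services a printer exposes is simply to probe the common printer ports. Here is a minimal Node.js sketch; the host address is a made-up example, and plain TCP reachability is only a first approximation of what is actually enabled:

```javascript
// checkports.js: probe common printer service ports on a host.
var net = require('net');

// Common printer-related ports: raw/JetDirect, LPD, IPP, HTTP(S) admin, telnet.
var PRINTER_PORTS = [9100, 515, 631, 80, 443, 23];

function checkPort(host, port, cb) {
  var sock = new net.Socket();
  var called = false;
  function done(open) {
    if (called) return; // guard: error can fire after timeout
    called = true;
    sock.destroy();
    cb(port, open);
  }
  sock.setTimeout(2000);
  sock.once('connect', function () { done(true); });
  sock.once('timeout', function () { done(false); });
  sock.once('error', function () { done(false); });
  sock.connect(port, host);
}

var host = process.argv[2] || '192.168.1.50'; // hypothetical printer address
PRINTER_PORTS.forEach(function (port) {
  checkPort(host, port, function (p, open) {
    console.log(host + ':' + p + ' ' + (open ? 'OPEN (disable if unused)' : 'closed/filtered'));
  });
});
```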

Lastly, printers should be behind firewalls, not only protected from the internet but also segmented from the corporate network, so that only the designated ports, or flows between the printer and print servers, are allowed.

Conclusion

The features HP describes are snake oil. If they worked well, they’d still only address a small part of the spectrum of attacks against printers. And, since there are no technical details or independent evaluation of the features, they are almost certainly lies.

If HP really cared about security, they’d make their software more secure. They’d use fuzzing tools like AFL to secure it. They’d enable ASLR and stack cookies. They’d compile C code with run-time buffer overflow checks. They’d have a bug bounty program. It’s not something they can easily market, but at least it’d be real.

If you care about printer security, then do the steps I outline above, especially firewalling printers from the traditional network. Seriously, putting a $100 firewall between a VLAN for your printers and the rest of the network is a cheap and easy way to get a vast amount of security. If you can’t secure printers this way, buying snake-oil features like the ones HP describes won’t help you.

Credential Stealing as an Attack Vector

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/05/credential_stea.html

Traditional computer security concerns itself with vulnerabilities. We employ antivirus software to detect malware that exploits vulnerabilities. We have automatic patching systems to fix vulnerabilities. We debate whether the FBI should be permitted to introduce vulnerabilities in our software so it can get access to systems with a warrant. This is all important, but what’s missing is a recognition that software vulnerabilities aren’t the most common attack vector: credential stealing is.

The most common way hackers of all stripes, from criminals to hacktivists to foreign governments, break into networks is by stealing and using a valid credential. Basically, they steal passwords, set up man-in-the-middle attacks to piggy-back on legitimate logins, or engage in cleverer attacks to masquerade as authorized users. It’s a more effective avenue of attack in many ways: it doesn’t involve finding a zero-day or unpatched vulnerability, there’s less chance of discovery, and it gives the attacker more flexibility in technique.

Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) group — basically the country’s chief hacker — gave a rare public talk at a conference in January. In essence, he said that zero-day vulnerabilities are overrated, and credential stealing is how he gets into networks: “A lot of people think that nation states are running their operations on zero days, but it’s not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

This is true for us, and it’s also true for those attacking us. It’s how the Chinese hackers breached the Office of Personnel Management in 2015. The 2014 criminal attack against Target Corporation started when hackers stole the login credentials of the company’s HVAC vendor. Iranian hackers stole US login credentials. And the hacktivist who broke into the cyber-arms manufacturer Hacking Team and published pretty much every proprietary document from that company used stolen credentials.

As Joyce said, stealing a valid credential and using it to access a network is easier, less risky, and ultimately more productive than using an existing vulnerability, even a zero-day.

Our notions of defense need to adapt to this change. First, organizations need to beef up their authentication systems. There are lots of tricks that help here: two-factor authentication, one-time passwords, physical tokens, smartphone-based authentication, and so on. None of these is foolproof, but they all make credential stealing harder.
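As an illustration of how little machinery one of these tricks requires, here is a minimal sketch of time-based one-time passwords (RFC 6238) using only Node.js’s crypto module. It is a sketch under stated assumptions, not a production implementation: the shared secret is assumed to be already provisioned to the user’s device, and a real deployment would use a vetted library plus rate limiting.

```javascript
// totp-sketch.js: minimal RFC 6238 TOTP (illustration only).
var crypto = require('crypto');

function hotp(secret, counter) {
  var msg = Buffer.alloc(8);
  msg.writeUInt32BE(Math.floor(counter / 0x100000000), 0); // high 32 bits
  msg.writeUInt32BE(counter % 0x100000000, 4);             // low 32 bits
  var h = crypto.createHmac('sha1', secret).update(msg).digest();
  var off = h[h.length - 1] & 0x0f;                        // dynamic truncation
  var code = ((h[off] & 0x7f) << 24) | (h[off + 1] << 16) | (h[off + 2] << 8) | h[off + 3];
  return ('000000' + (code % 1000000)).slice(-6);          // 6-digit code
}

function verifyTotp(secret, userCode) {
  var step = Math.floor(Date.now() / 1000 / 30);           // 30-second time step
  // Accept the current step and one step either side, to tolerate clock skew.
  for (var w = -1; w <= 1; w++) {
    if (hotp(secret, step + w) === userCode) return true;
  }
  return false;
}

// Usage (hypothetical secret): verifyTotp(Buffer.from('supersecretkey12'), '123456')
```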

Second, organizations need to invest in breach detection and — most importantly — incident response. Credential-stealing attacks tend to bypass traditional IT security software. But attacks are complex and multi-step. Being able to detect them in process, and to respond quickly and effectively enough to kick attackers out and restore security, is essential to resilient network security today.

Vulnerabilities are still critical. Fixing vulnerabilities is still vital for security, and introducing new vulnerabilities into existing systems is still a disaster. But strong authentication and robust incident response are also critical. And an organization that skimps on these will find itself unable to keep its networks secure.

This essay originally appeared on Xconomy.

Sencha ExtJS grid update in real time from the back-end

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2014/10/sencha-extjs-grid-update-in-real-time.html

Hello to all,

I love using Sencha ExtJS in some projects, as it is the most complete JavaScript UI framework, even though it is kind of slow to react and expensive in CPU and memory. ExtJS allows very fast, lazy development of otherwise complex UIs, and especially if you use Sencha Architect you can minimize UI development time and focus only on the important parts of your code.

However, ExtJS has quite a few drawbacks: missing features, and some things that are overly complex and hard for an inexperienced developer to keep in mind (like their Controller concept). Here I would like to show you a little example of how to implement a very simple real-time update of Sencha grids (tables) from the back-end for a multi-user application.

Why do you need this?

I often develop apps that have to be used by multiple people at the same time, sharing and modifying the same data. In such situations, a developer usually has to resolve all the conflicting cases where two users try to modify the exact same data, and Sencha ExtJS grids are not very helpful here. Sencha uses the concept of a Store that interacts with the back-end data (for example over a REST API); the Store is then assigned to a visualization object like a ComboBox or a Grid (table). If you modify a table (with the help of the Cell Editing or Row Editing plugin) whose Store has the autoSync property set to true, any modification you make automatically generates a REST POST/PUT/DELETE request to inform the back end. It could not be easier for a developer, right?

But the data sent to the back end contains the whole modified row, with all of its properties. At first sight this is not an issue, but it is if multiple users edit the same table at the same time. The problem arises because the Sencha Store caches the data. If User1 modifies a row, the change is stored on the server. But if User2 then modifies a different column of the same row, the request is built from User2’s old cached data and can overwrite User1’s modification. The back end cannot know which properties were actually modified and which of the two modifications should be kept.

There are a lot of tricks developers use to avoid these conflicts. A common one is to keep a version number with each data row on the server, which the UI clients receive in GET responses. A modification is accepted only if the client sends the same version number as the one stored on the server, and the version on the server is then incremented. If another modification arrives carrying older cached data, it is rejected because its version number no longer matches. The client receives an error, and the UI may then refresh its data, updating the versions and the content shown to the user. This is quite a popular model, but it is not very nice for the user: with multiple users modifying the same data at the same time, a user will constantly be out of date and will constantly receive errors, losing all of their modifications.

The only good solution, for both the users and the system in general, is to push changes in real time to all the UI applications. This does not remove every possibility of conflict, but it minimizes them greatly, making the whole experience more pleasant for the end user. This problem, and the need to resolve it, comes up quite often.
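As an aside, the version-number trick described above takes only a few lines on the Node.JS side. This is a hypothetical sketch (the collection and field names are made up, and it is separate from the example app below), using an atomic findAndModify that applies the update only while the client’s version still matches:

```javascript
// Optimistic-concurrency sketch (hypothetical names, not part of the example below).
// The update succeeds only if the row still carries the version the client last saw.
function updateWithVersion(db, id, clientVersion, changes, cb) {
  changes._version = clientVersion + 1; // bump the version on success
  db.collection('rows').findAndModify(
    { _id: id, _version: clientVersion }, // match the id AND the expected version
    [['_id', 'asc']],
    { $set: changes },
    { 'new': true },
    function (err, row) {
      if (err) return cb(err);
      if (!row) return cb(new Error('version conflict: refresh and retry'));
      return cb(null, row);
    }
  );
}
```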
Google Spreadsheets, and later Google Docs, introduced real-time updates between the UIs of all users modifying the same document about four years ago.

Example

I want to show here that it is not really hard to update the Stores of an ExtJS application in real time; it actually requires very little additional code. Let’s imagine we have a UI developed in Sencha ExtJS, with Stores communicating with the back end over REST. The back end for this example will be Node.JS and MongoDB. Between Node.JS and the ExtJS UI there will be a Socket.IO session, which we use to push updates from Node.JS to the ExtJS Stores. I love Socket.IO because it provides a simple WebSockets interface with a fallback to an HTTP polling model when WebSockets cannot be opened (which happens a lot if you are unlucky enough to use Microsoft security software, for example, which blocks WebSockets). On the MongoDB side we use a capped collection. I love capped collections: they are not only limited in size, they also let you attach a trigger (a tailable cursor on the collection) that receives any new insertion immediately when it happens.

So imagine your Node.JS Express REST code looks something like this:

```javascript
app.get('/rest/myrest', restGetMyrest);
app.put('/rest/myrest/:id', restPutMyrest);
app.post('/rest/myrest/:id', restPostMyrest);
app.del('/rest/myrest/:id', restDelMyrest);

// READ REST method
function restGetMyrest(req, res) {
  db.collection('myrest').find().toArray(function (err, q) {
    return res.send(200, q);
  });
}

// UPDATE REST method
function restPutMyrest(req, res) {
  var id = ObjectID.createFromHexString(req.param('id'));
  db.collection('myrest').findAndModify(
    { _id: id }, [['_id', 'asc']], { $set: req.body },
    { safe: true, 'new': true },
    function (err, q) {
      if (err || !q) return res.send(500);
      // Record the operation in the capped collection so listeners can broadcast it.
      db.collection('capDb').insert({ method: 'myrest', op: 'update', data: q }, function () {});
      return res.send(200, q);
    }
  );
}

// CREATE REST method
function restPostMyrest(req, res) {
  var id = ObjectID.createFromHexString(req.param('id'));
  req.body._id = id;
  db.collection('myrest').insert(req.body, { safe: true }, function (err, q) {
    if (err || !q) return res.send(500);
    setTimeout(function () {
      db.collection('capDb').insert({ method: 'myrest', op: 'create', data: q[0] }, function () {});
    }, 250);
    return res.send(200, q);
  });
}

// DELETE REST method
function restDelMyrest(req, res) {
  var id = ObjectID.createFromHexString(req.param('id'));
  db.collection('myrest').remove({ _id: id }, { safe: true }, function (err, q) {
    if (err || !q) return res.send(500);
    db.collection('capDb').insert({ method: 'myrest', op: 'delete', data: { _id: id } }, function () {});
    return res.send(201, {});
  });
}
```

As you can see above, we have implemented a classic CRUD REST method named “myrest”, retrieving and storing data in a MongoDB collection named ‘myrest’. With every modification, we also record that modification in a MongoDB capped collection named “capDb”. We use this capped collection as an internal communication mechanism within Node.JS. You could use events instead, or send the message directly to the Socket.IO emitter. However, I like capped collections because they bring a lot of advantages: multiple Node.JS processes can listen on a capped collection and receive the updates simultaneously.
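Before looking at the Socket.IO plumbing, it helps to picture the client-side Store that everything updates. Here is a minimal sketch, assuming Ext JS 4-style classes; the model name and fields are made up. The storeId matching the REST method name (‘myrest’) is what lets the Socket.IO handlers further below find the Store in Ext.StoreMgr:

```javascript
// A minimal sketch of the client-side model and Store (hypothetical model/fields).
Ext.define('App.model.MyRest', {
    extend: 'Ext.data.Model',
    idProperty: 'id',
    fields: [
        { name: 'id', mapping: '_id' }, // expose MongoDB's _id as 'id'
        'name', 'value'                 // made-up fields for illustration
    ]
});

var myRestStore = Ext.create('Ext.data.Store', {
    storeId: 'myrest',                  // must match the REST method name
    model: 'App.model.MyRest',
    autoLoad: true,
    autoSync: true,                     // edits trigger REST calls automatically
    proxy: {
        type: 'rest',
        url: '/rest/myrest',
        reader: { type: 'json' }
    }
});
```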
This makes it easier to implement clusters, including notifying Node.JS processes distributed across different machines.

So now, maybe in another file, you may have a simple piece of Node.JS Socket.IO code looking like this:

```javascript
var s = sIo.of('/updates');

// Create the capped collection if it does not already exist, then tail it and
// broadcast every inserted operation to all connected Socket.IO clients.
db.createCollection('capDb', { capped: true, size: 100000 }, function (err, col) {
  var stream = col.find({}, { tailable: true, awaitdata: true, numberOfRetries: -1 }).stream();
  stream.on('data', function (doc) {
    s.emit(doc.op, doc);
  });
});
```

With this little bit of code we broadcast the content of every insertion into the tailable capDb to everyone connected to Socket.IO on /updates; we also create the collection if it does not exist yet. This is everything you need in Node.JS. :)

Now we can get back to the ExtJS code. You simply need to have this code executed somewhere in your HTML application:

```javascript
var socket = io.connect('/updates');

socket.on('create', function (msg) {
  var s = Ext.StoreMgr.get(msg.method);
  // Skip if the store is unknown, the current page is full, or the record already exists.
  if (!s || s.getCount() > s.pageSize || s.findRecord('id', msg.data._id)) return;
  s.suspendAutoSync();
  s.add(msg.data);
  s.commitChanges();
  s.resumeAutoSync();
});

socket.on('update', function (msg) {
  var s = Ext.StoreMgr.get(msg.method);
  var r;
  if (!s || !(r = s.findRecord('id', msg.data._id))) return;
  s.suspendAutoSync();
  // Apply only the properties that actually changed.
  for (var k in msg.data) {
    if (r.get(k) != msg.data[k]) r.set(k, msg.data[k]);
  }
  s.commitChanges();
  s.resumeAutoSync();
});

socket.on('delete', function (msg) {
  var s = Ext.StoreMgr.get(msg.method);
  var r;
  if (!s || !(r = s.findRecord('id', msg.data._id))) return;
  s.suspendAutoSync();
  s.remove(r);
  s.commitChanges();
  s.resumeAutoSync();
});
```

That is all. Here is what we do from end to end: when Node.JS receives any CRUD REST operation, it updates the data in MongoDB, and for Create, Update, and Delete it also notifies all listening web clients over Socket.IO (in my example I use a tailable capped collection in MongoDB as an internal message bus, but you could emit to Socket.IO directly or use another message bus such as an EventEmitter). The ExtJS side receives the update over Socket.IO and assumes that the method property contains the name of the Store to be updated. We then look up the Store and, if it exists, suspend AutoSync (otherwise we could get into an update->autosync->rest->update loop), modify the content of the record (or the Store), and resume AutoSync.

With this simple code you can broadcast all data modifications to all ExtJS users who are currently online, so they see updates in their grids in real time. A single REST method may be used by multiple Stores; in that case, you have to extend the code with an association between the REST method name and all the related Stores. For this simple example, however, that is unnecessary.

Some other day I may show you the “ExtJS WebSockets CRUD proxy” I made, where there is only one communication channel between the Stores and the back end: Socket.IO. It is much faster and removes the need for any REST code on your server.
