Tag Archives: Computing/Software

How to Protect Enterprise Systems with Cloud-Based Firewalls

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how-to-protect-enterprise-systems-with-cloudbased-firewalls

In this webcast, SANS analyst Kevin Garvey explores key features of cloud-based firewalls and how they differ from more traditional firewalls.

As organizations migrate to the cloud, their security must evolve to meet new challenges. Cloud-based firewalls are a key part of those security plans. In this webcast, SANS analyst Kevin Garvey explores key features of cloud-based firewalls and how they differ from more traditional firewalls, the ease with which organizations can manage firewalls in AWS, and advanced firewall features that are of significant value to users’ organizations.

Attendees will learn:

  • How web filtering, network logging, intrusion detection and prevention systems, single sign-on and authentication support, and deep packet inspection function in a cloud-based environment
  • How easily firewalls can be managed through APIs, AWS CloudFormation, and independent software vendors (see the sketch after this list)
  • How features such as behavioral threat detection and next-generation analytics can enhance the security that firewalls provide
  • How a firewall can be deployed, and advanced features enabled, in an EC2 instance
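To give a concrete feel for API-driven firewall management, here is a minimal sketch using boto3 to create a security-group rule set. The VPC ID, port, and CIDR values are placeholders, and this illustrates the general AWS API workflow rather than any particular vendor’s cloud firewall product.

```python
# A minimal sketch of managing firewall-style rules programmatically via the
# AWS API with boto3. The VPC ID, port, and CIDR range are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group that acts as a basic stateful firewall
# for instances in an existing VPC (the VPC ID below is hypothetical).
sg = ec2.create_security_group(
    GroupName="web-tier-firewall",
    Description="Allow inbound HTTPS only",
    VpcId="vpc-0abc1234def567890",
)

# Allow inbound HTTPS from anywhere; everything else stays blocked by default.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}],
    }],
)
```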

Get Keysight’s Basic Instruments Flyer Featuring the New N6790 series Electronic Loads

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/learn-about-keysights-next-generation-n6790-series-electronic-loads

Get Keysight’s Basic Instruments Flyer Featuring the New N6790 series Electronic Loads

Power-source technology has evolved drastically while electronic load capability has lagged, negatively impacting production schedules, cost of test, and product quality. This Basic Instruments flyer highlights Keysight’s next-generation electronic loads, which enable a complete DC power conversion solution on the popular N6700 modular power system.


Revolutionize Your Design and Test Workflow

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/want-to-innovate-with-testops-learn-how

Revolutionize Your Design and Test Workflow

Agile software development profoundly transformed software development in the 1990s. Far more than a process, Agile created a new way to work.

Today, a similar transformation is happening in test and measurement. TestOps is an innovative approach to product design and test which improves workflow efficiency and speeds product time to market.

Learn more about TestOps and how to accelerate your product development workflow.


AI Can Edit Photos With Zero Experience

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/tech-talk/computing/software/ai-can-edit-photos-with-zero-experience

A new technique called Double-DIP deploys deep learning to polish images without prior training

Imagine showing a photo taken through a storefront window to someone who has never opened her eyes before, and asking her to point to what’s in the reflection and what’s in the store. To her, everything in the photo would just be a big jumble. Computers can perform image separations, but to do it well, they typically require handcrafted rules or many, many explicit demonstrations: here’s an image, and here are its component parts.

New research finds that a machine-learning algorithm given just one image can discover patterns that allow it to separate the parts you want from the parts you don’t. The multi-purpose method might someday benefit any area where computer vision is used, including forensics, wildlife observation, and artistic photo enhancement.

Many tasks in machine learning require massive amounts of training data, which is not always available. A team of Israeli researchers is exploring what they call “deep internal learning,” where software figures out the internal structure of a single image from scratch. Their new work builds on a recent advance from another group called DIP, or Deep Image Prior. (Spoiler: The new method is called Double-DIP.)
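To make the “single image, no training set” idea concrete, here is a minimal sketch in the spirit of Deep Image Prior, assuming PyTorch; the tiny network and iteration budget are illustrative and are not the architecture used in the DIP or Double-DIP papers.

```python
# A minimal sketch of the Deep Image Prior idea: fit a randomly initialized
# network to a single corrupted image; stopping early keeps structure and
# discards noise. Layer sizes and step count are placeholders.
import torch
import torch.nn as nn

def fit_prior(noisy, steps=2000, lr=0.01):
    """noisy: (1, 3, H, W) tensor holding the single corrupted image."""
    net = nn.Sequential(                         # small untrained conv net acts as the prior
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, padding=1),
    )
    z = torch.randn(1, 32, *noisy.shape[-2:])    # fixed random input code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(z) - noisy) ** 2).mean()    # reconstruct only this one image
        loss.backward()
        opt.step()
    return net(z).detach()                       # restored image estimate
```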

EMI step-by-step guide from Rohde & Schwarz - Download for Free

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/emi-stepbystep-guide-from-rohde-schwarz-download-for-free

Discover and analyze EMI with a more systematic and methodical approach to solve your problems.

In our free step-by-step guide, we break down the whole EMI design test process into “Locate”, “Capture”, and “Analyze”. Download & learn more.


Tips and Tricks on How to Verify Control Loop Stability

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/tips-and-tricks-on-how-to-verifying-control-loop-stability

Register for our Application Note “Tips and Tricks on how to verify control loop stability”

The application note explains the main measurement concepts, guides the user through the measurements, and covers the key topics in a practical manner. Wherever possible, it highlights points where the user should pay particular attention.


JumpStart Guide to Cloud-Based Firewalls in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/jumpstart-guide-to-cloudbased-firewalls-in-aws

In this webinar, SANS, Optiv, and AWS Marketplace lead an in-depth exploration of the key issues to consider when choosing next-generation firewall/threat prevention solutions for integration into a cloud environment, and recommend a process for making that important decision.


Attendees will learn:

  • How cloud design affects the selection and use of next-generation firewalls and threat protection capabilities
  • Needs and capabilities associated with firewalls and threat prevention, including intrusion prevention, antivirus, logging and alerting, event correlation, continuous dynamic updating of threat databases, and malware protection
  • Business, technical, and operational considerations for cloud-based firewall protection, including AWS-specific considerations and real-world success observations
  • Key questions for potential vendors to determine which products are well-suited for integration and implementation in your AWS environment

GitHub Releases New Tools to Report Vulnerabilities

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/github-releases-new-tools-to-report-vulnerabilities

The new features came out the same day as a study that found many open-source projects lack a clear way to report security problems

For most software developers, importing code from third-party libraries is an easy way to add new functionalities to a program without building those features from scratch. But relying on open-source libraries can be risky, as hackers often target security vulnerabilities within them.

Given all this, it’s important for users of any library to be able to report potential security issues to the project’s owners, so such problems can be fixed before they’re exploited. But until recently, many projects on the online repository GitHub lacked a clear way for users to submit security reports.

“I think reporting is the first step needed,” says University of Waterloo assistant professor Meiyappan Nagappan. But, adds University of Michigan professor Atul Prakash, “if the reporting process isn’t simple and straightforward, that can discourage or delay security reporting. And that can have consequences.”

While working on another project in 2018, Nagappan and his team found it difficult to report a vulnerable version of Apache Struts, the open-source library hackers exploited to breach Equifax in 2017. They tried informing other GitHub projects with the same dependency through a combination of emailing project owners, opening issues, and submitting pull requests.

How to Build an Endpoint Security Strategy in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how-to-build-an-endpoint-security-strategy-in-aws

Cloud endpoint security is increasingly relevant in today’s business world and is critical to successful cloud migrations. In this webinar, SANS and AWS Marketplace will discuss this evolution and what it means for your AWS environment.

Today’s cloud-based endpoint security solutions differ significantly from traditional on-premises solutions. Still considered a basic security requirement, endpoint security is the cornerstone of any successful cloud migration strategy.

In this webcast, SANS analyst Thomas Banasik identifies the top challenges businesses face when migrating to the cloud and walks through the process of protecting cloud assets by using a defense-in-depth architecture to create a readily deployable, fully integrated endpoint security strategy.

Attendees will learn how to:

  • Evaluate security, migration, scale, speed and complexity requirements
  • Implement key endpoint security capabilities, including integrated machine learning, EDR, UBA and DLP solutions
  • Deploy endpoint security agents and use a single pane of glass platform to increase visibility
  • Employ agentless monitoring for synchronized threat intelligence

Register for this webinar to be among the first to receive the associated whitepaper written by Thomas J. Banasik.

DeepMind Teaches AI Teamwork

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/tech-talk/computing/software/deepmind-teaches-ai-teamwork

AIs that were given a “social” drive and rewarded for influence learned to cooperate

The U.S. women’s soccer team has been showing a commanding World Cup performance in France. What would it take for a group of robotic players to show such skill (besides agility and large batteries)? For one, teamwork. But coordination in even simple games has been difficult for artificial intelligence to learn without explicit programming. New research takes a step in the right direction, showing that when virtual players are rewarded for social influence, cooperation can emerge.

Humans are driven not just by extrinsic motivations—for money, food, or sex—but also by intrinsic ones—for knowledge, competence, and connection. Research shows that giving robots and machine-learning algorithms intrinsic motivations, such as a sense of curiosity, can boost their performance on various tasks. In the new work, presented last week at the International Conference on Machine Learning, AIs were given a “social” drive.
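As a rough illustration of what a “social” drive can look like in code, the toy sketch below adds an influence bonus to an agent’s extrinsic reward: the bonus compares a teammate’s behavior given the agent’s actual action against a counterfactual average over the agent’s other possible actions. The function names and weighting are hypothetical and do not reproduce DeepMind’s exact formulation.

```python
# Toy sketch of reward shaping with a "social influence" bonus (illustrative only).
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions (small epsilon for safety)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

def shaped_reward(extrinsic, teammate_policy, my_actions, my_action_taken, alpha=0.1):
    """teammate_policy(a) -> teammate's action distribution conditioned on my action a.
    All names and the weight alpha are illustrative placeholders."""
    conditional = teammate_policy(my_action_taken)
    # Counterfactual marginal: average the teammate's policy over my possible actions.
    marginal = np.mean([teammate_policy(a) for a in my_actions], axis=0)
    influence = kl(conditional, marginal)     # how much my action changed their behavior
    return extrinsic + alpha * influence
```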

“This is a truly fascinating article with a huge potential for expansions,” says Christian Guckelsberger, a computer scientist at Queen Mary University of London who studies AI and intrinsic motivation but was not involved in the work.

What Can AI Tell Us About Fine Art?

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/what-can-ai-tell-us-about-fine-art

After analyzing more than 100,000 paintings, this AI concludes that the most beautiful images are not necessarily memorable


Whether it’s the enigmatic playfulness of Mona Lisa’s smile or the swirling soft colors of a Monet painting, there are qualities of fine art that attract audiences, like a moth to a flame. What is it about these pieces that has captivated people throughout centuries? Researchers are now using machine-learning algorithms to tease apart these intricacies and explore the relationship between the aesthetics, sentimental value, and memorability of fine art.

Eva Cetinic is an art enthusiast and researcher at the Rudjer Boskovic Institute in Croatia. While she believes that art is indescribable in many ways, she wanted to challenge her own perspective by exploring how machine learning might quantify art. “The rise of artificial intelligence forces us to re-think what values are specifically human, and the understanding of art is a particularly fruitful playground for this kind of investigation,” she explains.

To start, Cetinic and her colleagues analyzed more than 100,000 images from WikiArt. Their results, published 5 June in IEEE Access, hint at common themes of what we find beautiful and captivating.
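As a hedged sketch of the general recipe (not the authors’ exact pipeline), one could reuse a pretrained image network and fit a small regression head to predict a human-rated score such as aesthetics or memorability. The snippet below assumes PyTorch and torchvision, with placeholder data.

```python
# Minimal sketch: fine-tune a regression head on a pretrained backbone to
# score images. Dataset, labels, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # one score per image

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(images, scores):
    """images: (N, 3, 224, 224) tensor; scores: (N, 1) human ratings."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), scores)
    loss.backward()
    optimizer.step()
    return loss.item()
```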

University of Southampton Uses the USRP and LabVIEW to Change the Way It Teaches Wireless Communications

Post Syndicated from National Instruments original https://spectrum.ieee.org/computing/software/university-of-southampton-uses-the-usrp-and-labview-to-change-the-way-it-teaches-wireless-communications

The University of Southampton has been looking at new and innovative ways to teach the principles of wireless communication at a time when there is significant interest in wireless technologies

Demonstrating the Practical Challenges of Wireless Communications

Most electronics education worldwide teaches wireless communications with a typical focus on communications theory. At the University of Southampton, educators have taken a different approach, teaching students the practical aspects of communication technology to better prepare them for careers in industry. Students focus on the rapid prototyping of a wireless communications system with live radio frequency (RF) signal streaming for a practical approach to communications education. With this approach, students gain valuable experience in manipulating live signals and a greater understanding of wireless communication and its practical challenges.

A Real Communications System to Demonstrate Practical Concepts

The University of Southampton has built this demonstration of the practical concepts of wireless communication into its master’s course in wireless communications. The focus was on creating a wireless communications system to demonstrate the concept of differential quadrature phase-shift keying (DQPSK) and how it is used within wireless communications. The students were given a USRP™ (Universal Software Radio Peripheral) and tasked with building a DPSK transceiver in a practical session. Before this they attended a one-hour lecture on the USRP and how to use it to achieve their learning outcomes. Additionally, they were given a pre-session assignment, which familiarised them with LabVIEW and its environment.

Practical Challenges of Wireless Communication

Southampton students were tasked with building one half of a wireless communications system. The setup consisted of an incomplete DQPSK demodulator, which needed to be completed so that a modulated signal sent by a separate USRP device could be decoded. Completing this task required a number of steps covering different concepts, so that the end result was a fully working communications system.

The students first applied a filter to the received and down-converted signal and compared this to the input of the filter in the transmitter. They then downsampled the data to detect, synchronize, and extract the DPSK symbols from the waveform and compared them to those in the transmitter. Finally, students demodulated and decoded these DPSK symbols to recover the message bits, which were again compared with those in the transmitter.
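For readers outside LabVIEW, the following NumPy/SciPy sketch mirrors the three stages just described: filtering, downsampling to the symbol rate, and differential decoding. Timing recovery is idealized and the parameters are placeholders, so this is an illustration of the signal chain rather than the students’ actual implementation.

```python
# Simplified DQPSK demodulation chain (filter -> downsample -> differential decode).
import numpy as np
from scipy.signal import firwin, lfilter

def dqpsk_demodulate(iq, samples_per_symbol=8):
    # 1) Low-pass filter the received, down-converted complex baseband signal.
    taps = firwin(numtaps=64, cutoff=1.0 / samples_per_symbol)
    filtered = lfilter(taps, 1.0, iq)

    # 2) Downsample to one complex sample per symbol (ideal timing assumed).
    symbols = filtered[::samples_per_symbol]

    # 3) Differential decode: the information is carried by the phase change
    #    between consecutive symbols, mapped to 2 bits per symbol.
    dphi = np.angle(symbols[1:] * np.conj(symbols[:-1]))
    dibits = np.round((dphi % (2 * np.pi)) / (np.pi / 2)).astype(int) % 4
    return dibits   # each value in {0, 1, 2, 3} encodes two bits
```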

After these three features were implemented in the demodulator, students rigorously tested their system by comparing their constellation graph and signal eye diagram to those of the transmitter.

The constellation diagram gives a visual overview of how the different phases in the phase-shift keying modulation scheme map to symbols and how they are represented within the signal envelope. It is important because it shows how much interference or distortion is in a signal or channel and offers a quick way of seeing whether everything is functioning normally. The eye diagram gives a similar visual reference: it superimposes the different symbols within a channel over one another so the characteristics of the system can be seen. From this, students could infer characteristics such as whether the symbols were too long, too short, noisy, or poorly synchronized. An “open” eye indicates minimal distortion in the signal; if the signal is distorted, the eye pattern begins to close, decreasing the spaces in the pattern.
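The two visual checks described above can be reproduced with a few lines of plotting code. The sketch below, assuming matplotlib, takes recovered symbols and a baseband waveform (for example, outputs of the demodulator sketch above) and draws a constellation and an eye diagram.

```python
# Illustrative plotting helpers for the two checks described above.
import numpy as np
import matplotlib.pyplot as plt

def plot_constellation(symbols):
    plt.figure()
    plt.scatter(symbols.real, symbols.imag, s=4)
    plt.title("Constellation"); plt.xlabel("I"); plt.ylabel("Q")

def plot_eye(waveform, samples_per_symbol=8, traces=200):
    plt.figure()
    span = 2 * samples_per_symbol                 # show two symbol periods per trace
    for k in range(traces):
        seg = waveform[k * samples_per_symbol : k * samples_per_symbol + span]
        if len(seg) < span:
            break
        plt.plot(np.real(seg), color="tab:blue", alpha=0.1)
    plt.title("Eye diagram"); plt.xlabel("Sample"); plt.ylabel("Amplitude")
```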

Four Out of Five Students Would Like to Make More Use of USRPs

After the conclusion of the communications systems module, students completed questionnaires about their satisfaction and provided feedback on the practical session.

More than four out of five students, 82 percent, said that in the future they would like to make use of the USRP in the taught aspects of their course. In addition, 75 percent of students said that they would like to make use of the USRP in their MSc research projects—showing its great potential in all aspects of wireless communications education and research.

One student said that “The USRP gives an avenue for exploration. It is a good tool to bridge the gap between practical and theory,” while another said that “The USRP vividly helps me understand the theory that I learned in class.” This shows that Southampton has created a strong benchmark in practical communications education.

Next Steps

See Other Academic Applications

Learn More About NI USRP

How AI is Starting to Influence Wireless Communications

Post Syndicated from National Instruments original https://spectrum.ieee.org/computing/software/how-ai-is-starting-to-influence-wireless-communications

Machine learning and deep learning technologies are promising an end-to-end optimization of wireless networks while they commoditize PHY and signal-processing designs and help overcome RF complexities

What happens when artificial intelligence (AI) technology arrives on wireless channels? For a start, AI promises to address the design complexity of radio frequency (RF) systems by employing powerful machine learning algorithms and significantly improving RF parameters such as channel bandwidth, antenna sensitivity and spectrum monitoring.

So far, engineering efforts have focused on making individual components in wireless networks smarter via technologies like cognitive radio. However, these piecemeal optimizations, targeted at applications such as spectrum monitoring, have been labor intensive, requiring hand-engineered feature extraction and selection that often takes months to design and deploy.

On the other hand, AI techniques like machine learning and deep learning can use data analysis to learn radio signal types in a few hours. For instance, a trained deep neural network takes a few milliseconds to perform signal detection and classification, compared with traditional methodologies based on iterative, algorithmic signal search, detection, and classification.
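The kind of model implied here is, in essence, a small classifier over raw I/Q samples. The sketch below, assuming PyTorch, shows how a single forward pass through a compact 1-D convolutional network can stand in for an iterative search-and-classify loop; the layer sizes and signal classes are illustrative, not any vendor’s actual architecture.

```python
# Minimal 1-D CNN classifier over a window of raw I/Q samples (illustrative).
import torch
import torch.nn as nn

NUM_CLASSES = 5   # e.g. Wi-Fi, Bluetooth, LTE, radar, noise (placeholder labels)

classifier = nn.Sequential(
    nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),   # 2 channels: I and Q
    nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, NUM_CLASSES),
)

# Inference on one 1024-sample I/Q window: a single forward pass replaces the
# iterative search-and-detect loop of conventional approaches.
iq_window = torch.randn(1, 2, 1024)
signal_type = classifier(iq_window).argmax(dim=1)
```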

It is important to note that such gains also significantly reduce power consumption and computational requirements. Moreover, a learned communication system allows wireless designers to prioritize key design parameters such as throughput, latency, range and power consumption.

More importantly, deep learning-based training models facilitate a better awareness of the operational environment and promise to offer end-to-end learning for creating an optimal radio system. Case in point: a training model that can jointly learn an encoder and decoder for a radio transmitter and receiver while encompassing RF components, antennas and data converters.
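The “jointly learn an encoder and decoder” idea is often illustrated with a channel autoencoder. The sketch below, assuming PyTorch, trains a toy transmitter and receiver end to end through a stand-in Gaussian-noise channel; a real system would replace that stand-in with models of the RF components, antennas, and data converters mentioned above.

```python
# Toy channel autoencoder: encoder (transmitter) and decoder (receiver) trained
# jointly through a differentiable noise channel. All sizes are illustrative.
import torch
import torch.nn as nn

BITS_PER_MESSAGE, CHANNEL_USES = 4, 8
M = 2 ** BITS_PER_MESSAGE                      # number of distinct messages

encoder = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, 2 * CHANNEL_USES))
decoder = nn.Sequential(nn.Linear(2 * CHANNEL_USES, 32), nn.ReLU(), nn.Linear(32, M))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for _ in range(1000):
    msgs = torch.randint(0, M, (256,))
    x = encoder(nn.functional.one_hot(msgs, M).float())
    x = x / x.norm(dim=1, keepdim=True)        # crude average-power constraint
    y = x + 0.1 * torch.randn_like(x)          # stand-in AWGN channel
    loss = nn.functional.cross_entropy(decoder(y), msgs)
    opt.zero_grad(); loss.backward(); opt.step()
```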

Additionally, what technologies like deep learning promise in the wireless realm is the commoditization of the physical layer (PHY) and signal processing design. Combining deep learning-based sensing with active radio waveforms creates a new class of use cases that can intelligently operate in a variety of radio environments.

The following section will present a couple of design case studies that demonstrate the potential of AI technologies in wireless communications.

Two design case studies

First, the OmniSIG software development kit (SDK) from DeepSig Inc. is based on deep learning technology and employs real-time signal processing to allow users to train signal detection and classification sensors.

DeepSig claims that its OmniSIG sensor can detect Wi-Fi, Bluetooth, cellular and other radio signals up to 1,000 times faster than existing wireless technologies. Furthermore, it enables users to understand the spectrum environment and thus facilitate contextual analysis and decision making.

ENSCO, a U.S. government and defense supplier, is training the OmniSIG sensor to detect and classify wireless and radar signals. Here, ENSCO is aiming to deploy AI-based capabilities to overcome the performance limitations of conventionally designed RF systems for signal intelligence.

What DeepSig’s OmniPHY software does is allow users to learn the communication system, and subsequently optimize channel conditions, hostile spectrum environments and hardware performance limitations. The applications include anti-jam capabilities, non-line-of-sight communications, multi-user systems in contested spectrums and mitigation of the effects of hardware distortion.

Another design case study showing how AI technologies like deep learning can impact future hardware architectures and designs is a passive Wi-Fi sensing system for monitoring health, activity and well-being in nursing homes. The continuous surveillance system, developed at Coventry University, employs gesture recognition libraries and machine-learning systems for signal classification and creates a detailed analysis of the Wi-Fi signals that reflect off a patient, revealing patterns of body movements and vital signs.

Residential healthcare systems usually employ wearable devices, camera-based vision systems and ambient sensors, but they entail drawbacks such as physical discomfort, privacy concerns and limited detection accuracy. On the other hand, a passive Wi-Fi sensing system, based on activity recognition and through-wall respiration sensing, is contactless, accurate and minimally invasive.

The passive Wi-Fi sensing for nursing homes has its roots in a research project on passive Wi-Fi radar carried out at University College London. The passive Wi-Fi radar prototype —based on software-defined radio (SDR) solutions from National Instruments (NI) — is completely undetectable and can be used in military and counterterrorism applications.

USRP transceiver plus LabVIEW

A passive Wi-Fi sensing system is a receive-only system that measures the dynamic changes in Wi-Fi signals caused by indoor objects moving across multiple propagation paths. Here, AI technologies like machine learning allow engineers to use frequency to measure the rate of phase change over the measurement duration, as well as Doppler shift, to identify movements.

Machine learning algorithms can establish the link between physical activities and the Doppler-time spectral map associated with gestures such as picking things up or sitting down. The phase of the data batches is accurate enough to discern the small body movements caused by respiration.
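A simplified view of that processing, assuming NumPy and SciPy, is sketched below: a short-time Fourier transform turns the received baseband samples into a Doppler-time map, and per-batch phase averages expose the slow periodic changes associated with respiration. Sample rates and window lengths are placeholders, and a real passive Wi-Fi radar also cancels the direct transmitter signal first.

```python
# Illustrative Doppler-time map and batch-phase extraction for passive Wi-Fi sensing.
import numpy as np
from scipy.signal import stft

def doppler_time_map(samples, fs=1000.0):
    """samples: complex baseband sequence (e.g. after cross-correlation with the
    reference Wi-Fi signal); fs is a placeholder sample rate."""
    f, t, Z = stft(samples, fs=fs, nperseg=256, return_onesided=False)
    power_db = 20 * np.log10(np.abs(Z) + 1e-12)
    return np.fft.fftshift(f), t, np.fft.fftshift(power_db, axes=0)

def batch_phase(samples, batch=256):
    """Mean phase per batch; slow periodic changes track chest movement/respiration."""
    n = len(samples) // batch
    return np.angle(samples[:n * batch].reshape(n, batch).mean(axis=1))
```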

Coventry University built a prototype of a passive Wi-Fi sensing system using Universal Software Radio Peripheral (USRP) and LabVIEW software to capture, process and interpret the raw RF signal samples. LabVIEW, an intuitive graphical programming tool for both processors and FPGAs, enables engineers to manage complex system configurations and adjust signal processing parameters to meet the exact requirements.

On the other hand, USRP is an SDR-based tunable transceiver that works in tandem with LabVIEW for prototyping wireless communication systems. It has already been used in prototyping wireless applications such as FM radio, direction finding, RF record and playback, passive radar and GPS simulation.

Engineers at Coventry University have used USRP to capture the raw RF samples and deliver them to the LabVIEW application for speedy signal processing. They have also dynamically changed the data arrays and batch size of analysis routines to adapt the system to slow and fast movements.

Engineers were able to interpret some captured signals and directly link the periodic change of batch phase with gestures and respiration rate. Next, they examined if the phase of the data batches was accurate enough to discern the small body movements caused by respiration.

AI: The next wireless frontier

The above design examples show the potential of AI technologies like machine learning and deep learning to revolutionize RF design, addressing a broad array of RF design areas and creating new wireless use cases.

These are still the early days of implementing AI in wireless networks. But the availability of commercial products such as USRP suggests that the AI revolution has reached the wireless doorstep.

For more information on the role of AI technologies in wireless communications, go to Ettus Research, which provides SDR platforms like the USRP and has been a National Instruments brand since 2010.

 

 

Free Download: EMI step-by-step guide from Rohde & Schwarz

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/free-download-emi-stepbystep-guide-from-rohde-schwarz

Solve your EMI problems more efficiently with solutions from Rohde & Schwarz.

With this guide, you will be able to discover and analyze EMI with a more systematic and methodical approach to solve your problems.


New Zealand Startup Seeks to Automate (Most) Code Review

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/how-codelingo-helps-teams-develop-better-software

CodeLingo has developed a tool that it says can help developers with code reviews, refactoring, and documentation

Software developers are force multipliers. Yet instead of spending their time building new products or services, they are wasting too much of it maintaining existing code.

CodeLingo, a New Zealand-based startup founded in 2016, aims to change that. The company has developed an automated code review tool that catches new and existing issues in code. CodeLingo’s search engine and query language finds patterns across the code base and uses those patterns to automate code reviews, code refactoring (restructuring existing code to optimize it), and contributor documentation.

According to a 2018 study by Stripe, developers could increase the global GDP by US $3 trillion over the next 10 years. But developers spend almost half of their working hours—that’s 17 hours in an average 41-hour work week—on code maintenance. This includes finding and repairing bugs, fixing bad code, and refactoring. This equates to an estimated $300 billion loss in productivity each year.

CodeLingo hopes to recapture some of that loss so it could be spent on what matters. “CodeLingo is, in essence, an analysis platform,” says founder Jesse Meek. “It treats the whole software stack as data, then looks for patterns in that data and ways to automate common development workflows, such as finding and fixing bugs, automatically refactoring the code base, automating reviews of pull requests as they come into a repository, and automating the generation of contributor documentation.”
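As a toy illustration of pattern-finding over a code base (deliberately not CodeLingo’s actual query language, whose syntax is not shown here), the sketch below walks a repository’s Python files and flags bare except clauses, the kind of recurring issue an automated review bot could comment on.

```python
# Toy static-analysis rule: flag bare `except:` clauses across a repository.
import ast
import pathlib

def find_bare_excepts(repo_root="."):
    findings = []
    for path in pathlib.Path(repo_root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue                                  # skip files that don't parse
        for node in ast.walk(tree):
            if isinstance(node, ast.ExceptHandler) and node.type is None:
                findings.append((str(path), node.lineno, "bare 'except:' hides errors"))
    return findings

for file, line, message in find_bare_excepts():
    print(f"{file}:{line}: {message}")
```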

Get tips to develop your DAQ test systems

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/get-tips-to-develop-your-daq-test-systems

Reduce your test development time, increase throughput and improve the accuracy of your test systems

There is a growing trend across all industries to design feature-rich products. You need to thoroughly test your product while meeting market windows and project deadlines. Learn how a data acquisition system can help you achieve all of these goals in the e-book Four Things to Consider When Using a DAQ as a Data Logger.


Rohde & Schwarz Presents: Smart Jammer / DRFM Testing – Test and Measurement Solutions for the Next Level

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/rohde-schwarz-presents-smart-jammer-drfm-testing-test-and-measurement-solutions-for-the-next-level

The webinar introduces the concept of Digital RF Memory Jammers, describes their technology and the respective test and measurement challenges and solutions from Rohde & Schwarz.

The DRFM jammer has become a highly complex key element of the electronic attack (EA) suite. It has evolved from a simple repeater with some fading capabilities to a complex electronic attack asset. Some of the more critical tests are verifying proper operation and timing of the deception techniques at the system level, qualifying the individual components, submodules and modules at the RF/IF level, and, last but not least, making sure that clock jitter and power integrity are addressed early in the design stage. For all these requirements, Rohde & Schwarz offers cutting-edge test and measurement solutions.

Please note: By downloading a webinar, your contact information will be shared with the sponsoring company, Rohde & Schwarz GmbH & Co. KG, and the Rohde & Schwarz entity or subsidiary company mentioned in the imprint of www.rohde-schwarz.com, and you may be contacted by them directly via email or phone for marketing or advertising purposes.

Download for FREE: EMI step-by-step guide from Rohde & Schwarz

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/download-for-free-emi-stepbystep-guide-from-rohde-schwarz

Discover and analyze EMI with a more systematic and methodical approach to solve your problems.

In our free step-by-step guide, we break down the whole EMI design test process into “Locate”, “Capture”, and “Analyze”. Download & learn more.


Early Warning System Predicts Risk of Online Students Dropping Out

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/a-predictive-modeling-tool-for-identifying-postsecondary-students-at-risk-of-dropping-out

With the new system, every student is scored based on how likely they are to finish their courses


It’s easy enough for students to sign up for online university courses, but getting them to finish is much harder. Dropout rates for online courses can be as high as 80 percent. Researchers have tried to help by developing early warning systems that predict which students are more likely to drop out. Administrators could then use these predictions to target at-risk students with extra retention efforts. And as these early warning systems become more sophisticated, they also reveal which variables are most closely correlated with dropout risk.

In a paper published 16 April in IEEE Transactions on Learning Technologies, a team of researchers in Spain describe an early warning system that uses machine learning to provide tailored predictions for both new and recurrent students. The system, called SPA (a Spanish acronym for Dropout Prevention System), was developed using data from more than 11,000 students who were enrolled in online programs at Madrid Open University (UDIMA) over the course of five years.
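Conceptually, the scoring step can be as simple as a supervised classifier that outputs a dropout probability for each student. The sketch below, assuming scikit-learn, uses hypothetical features (weekly logins, assignments submitted, prior credits) and is not the variable set or model used in SPA.

```python
# Minimal dropout-risk scoring sketch with placeholder features and data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: weekly logins, assignments submitted, prior credits earned
X_train = np.array([[12, 5, 30], [1, 0, 6], [8, 4, 24], [0, 1, 0]])
y_train = np.array([0, 1, 0, 1])          # 1 = dropped out

model = LogisticRegression().fit(X_train, y_train)

new_students = np.array([[2, 1, 12], [10, 6, 18]])
risk = model.predict_proba(new_students)[:, 1]   # per-student dropout risk score
print(risk)   # administrators could target students above a chosen threshold
```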

Amateurs’ AI Tells Real Rembrandts From Fakes

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/computing/software/the-rembrandt-school-of-ai-an-algorithm-that-detects-art-forgery

In their spare time, a Massachusetts couple programmed a system that they say accurately identifies Rembrandts 90 percent of the time

A new AI algorithm may crack previously inaccessible image-recognition and analysis problems—especially those stymied by AI training sets that are too small, or whose individual sample images are too big and full of high-resolution detail that AI algorithms cannot process. Already, the new algorithm can detect forgeries of one famous artist’s work, and its creators are actively searching for other areas where it could potentially improve our ability to transform small data sets into ones large enough to train an AI neural network.

According to two amateur AI researchers, whose study is now under peer review at IEEE Transactions on Neural Networks and Learning Systems, the concept of entropy, borrowed from thermodynamics and information theory, may help AI systems uncover fake works of art.

In physical systems such as boiling pots of water and black holes, entropy concerns the amount of disorder contained within a given volume. In an image file, entropy is defined as the amount of useful, nonredundant information the file contains.
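That file-level definition maps directly onto Shannon entropy computed over pixel values. A short sketch, assuming Pillow and NumPy with a placeholder filename, is below.

```python
# Shannon entropy of an image in bits per pixel (grayscale histogram).
import numpy as np
from PIL import Image

def image_entropy(path):
    pixels = np.asarray(Image.open(path).convert("L")).ravel()
    counts = np.bincount(pixels, minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]                              # ignore unused gray levels
    return float(-(p * np.log2(p)).sum())     # bits per pixel

print(image_entropy("rembrandt.png"))         # placeholder filename
```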