In the ongoing race to make and break digital codes, the idea of perfect secrecy has long hovered on the horizon like a mirage. A recent research paper has attracted both interest and skepticism for describing how to achieve perfect secrecy in communications by using specially patterned silicon chips to generate one-time keys that are impossible to recreate.
Post Syndicated from Fahmida Y Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/personal-virtual-networks-control-data
To keep us connected, our smartphones constantly switch between networks. They jump from cellular networks to public Wi-Fi networks in cafes, to corporate or university Wi-Fi networks at work, and to our own Wi-Fi networks at home. But we rarely have any input into the security and privacy settings of the networks to which we connect. In many cases, it would be tough to even figure out what those settings are.
A team at Northeastern University is developing personal virtual networks to give everyone more control over their online activities and how their information is shared. Those networks would allow a person’s devices to connect to cellular and Wi-Fi networks—but only on their terms.
Adding endpoint detection and response (EDR) capabilities to your AWS (Amazon Web Services) environment can inform investigations and provide actionable details for remediation. Attend this webinar to discover how to unpack and leverage the telemetry provided by endpoint security solutions, using MITRE ATT&CK cloud techniques such as Exploit Public-Facing Application (T1190) and Transfer Data to Cloud Account (T1537) and examining process trees. You will also find out how these solutions can help identify who has vulnerable software or configurations on their systems by leveraging indicators of compromise (IOCs), such as MD5 hashes, to pinpoint the depth and breadth of a malware infection.
Many Internet of Things (IoT) devices rely on RSA keys and certificates to encrypt data before sending it to other devices, but these security tools can be easily compromised, new research shows.
Researchers from digital identity management company Keyfactor were able to compromise 249,553 distinct keys corresponding to 435,694 RSA certificates using a single virtual machine from Microsoft Azure. They described their work in a paper presented at the IEEE Conference on Trust, Privacy, and Security in Intelligent Systems and Applications in December.
“With under $3,000 of compute time in Azure, we were able to break 435,000 certificates,” says JD Kilgallin, Keyfactor’s senior integration engineer and researcher. “We showed that this attack is very easy to execute now.”
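The attack exploits a long-known weakness: RSA moduli generated with poor randomness sometimes share a prime factor, and Euclid's algorithm recovers that prime from the public keys alone. Below is a minimal sketch of the idea; the study itself used the much faster batch-GCD algorithm to scan tens of millions of keys, and the function name here is illustrative:

```python
import math

def find_shared_factors(moduli):
    """Pairwise GCD scan over public RSA moduli. If two keys were
    generated with poor randomness and share a prime, gcd(n1, n2)
    reveals that prime and factors both moduli at once -- no
    factoring of a full modulus is ever needed."""
    broken = {}
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = math.gcd(moduli[i], moduli[j])
            if 1 < g < moduli[i]:  # nontrivial shared factor found
                broken[moduli[i]] = (g, moduli[i] // g)
                broken[moduli[j]] = (g, moduli[j] // g)
    return broken

# Toy moduli: the first two share the prime 101; the third is safe.
weak = find_shared_factors([101 * 103, 101 * 107, 109 * 113])
```

This naive version is quadratic in the number of keys; batch GCD brings the cost down to roughly quasilinear, which is what makes scanning hundreds of millions of certificates affordable on a single cloud VM.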
Post Syndicated from Neil Savage original https://spectrum.ieee.org/tech-talk/telecom/security/mixing-quantum-states-boosts-fiber-communications
It’s difficult to send quantum information over the fiber-optic networks that carry most of the world’s data, but being able to would allow people to encrypt their messages with secret codes made unbreakable by the laws of physics. Now researchers have found a way to allow the transmission of such codes over long distances, by combining different quantum properties on the same photons.
Amidst rising tensions after the United States killed Qassem Soleimani, the chief of Iran’s Quds Force, in a drone strike in Baghdad last week, security experts and U.S. government officials warn that Iran may retaliate with cyberattacks.
Iran-based attack groups have expanded their digital offensive capabilities significantly since 2012, when they launched crippling distributed denial-of-service attacks against financial services companies. Since then, the cybersecurity arm of Iran’s Islamic Revolutionary Guard Corps, and private sector contractors acting on behalf of the government, have added tools to their arsenals.
Those tools enable attackers to execute account takeovers and spear phishing campaigns to steal intellectual property and sensitive information, and include destructive malware designed to disrupt operations, according to the National Cyber Awareness System alert issued by the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) earlier this month.
Iran has also “demonstrated a willingness” to use wiper malware, CISA said in its 6 January alert. Wipers are a category of malware that erases the contents of an infected machine’s hard drive and then destroys the computer’s master boot record, making it impossible for the machine to boot up again. Like any other type of malware, wipers rely on various methods for the initial infection, and once in, they can steal information or execute unauthorized code. The difference is that wipers don’t bother with stealth, because their primary purpose is to render the machine unusable.
Post Syndicated from IEEE Spectrum original https://spectrum.ieee.org/webinar/how-to-leverage-a-casb-for-your-aws-environment
Post Syndicated from Fahmida Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/the-fight-over-encrypted-dns-boils-over
Privacy and security specialists are in the middle of a very public fight over the future of Internet encryption. In September, cable companies and other telecommunications industry groups in the United States sent a letter to Congress protesting Google’s plans to encrypt Domain Name System (DNS) lookups in the browser. Mozilla sent a letter of its own this month, asking lawmakers to reject the industry’s lobbying efforts, saying they were based on “factual inaccuracies.”
At stake is how DNS traffic—the network queries that translate people-friendly domain names into server IP addresses—should be encrypted.
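Under DNS-over-HTTPS (RFC 8484), the same wire-format query that normally travels in cleartext over UDP port 53 is instead sent as the body of an encrypted HTTPS request, so on-path observers see only TLS traffic. A minimal sketch of building such a query packet (the transaction ID and resolver endpoint mentioned in the comment are illustrative):

```python
import struct

def build_dns_query(name, qtype=1):
    """Build a wire-format DNS query for `name` (qtype 1 = A record).
    With DNS-over-HTTPS, this same packet becomes the body of a POST
    to a resolver such as https://cloudflare-dns.com/dns-query with
    Content-Type application/dns-message, instead of a UDP datagram."""
    # Header: ID, flags (recursion desired), 1 question, 0 answers.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, a zero terminator,
    # then QTYPE and QCLASS (1 = Internet).
    labels = b"".join(
        bytes([len(part)]) + part.encode("ascii")
        for part in name.split(".")
    ) + b"\x00"
    return header + labels + struct.pack(">HH", qtype, 1)

packet = build_dns_query("example.com")
```

The packet itself is unchanged from classic DNS; only the transport moves inside TLS, which is exactly what the telecom industry's letter objects to, since intermediaries can no longer observe or filter the queries.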
Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how-to-build-a-threat-hunting-capability-in-aws
Every sound we hear has a unique signature thanks to the way it was created and which objects the sound waves have passed through. A team of South Korean researchers is now exploring whether the unique bioacoustic signatures created as sound waves pass through humans can be used to identify individuals. Their work, described in a study published 4 October in IEEE Transactions on Cybernetics, suggests this technique can identify a person with 97 percent accuracy.
The biometric system developed by the group at the Electronics and Telecommunications Research Institute (ETRI) uses a transducer to generate vibrations, and thus sound waves, which pass through a given body part—in this case, a finger, which is easily accessible and convenient. After the sound has passed through the skin, bones, and other tissues, a sensor picks up the unique bioacoustic signature. Modeling further helps tease apart the distinct signatures of individuals.
“Modeling allowed us to infer what structures or material features of the human body actually differentiated people,” explains Joo Yong Sim, one of the ETRI researchers who conducted the study. “For example, we could see how the structure, size, and weight of the bones, as well as the stiffness of the joints, affect the bioacoustics spectrum.”
The approach is sensitive enough to distinguish different fingers on the same hand, which means a person must authenticate with the same finger that was originally enrolled.
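A toy illustration of spectrum-based matching, assuming the system reduces each recording to a normalized magnitude spectrum and compares enrollments by cosine similarity—the paper's actual features and classifier are more elaborate, and every name, frequency, and threshold here is invented:

```python
import numpy as np

np.random.seed(0)

def bioacoustic_signature(samples):
    """Reduce a recorded vibration response to a normalized magnitude
    spectrum -- a stand-in for the bioacoustic signature."""
    spectrum = np.abs(np.fft.rfft(samples))
    return spectrum / np.linalg.norm(spectrum)

def same_finger(sig_a, sig_b, threshold=0.9):
    """Accept when the cosine similarity of two signatures is high."""
    return float(np.dot(sig_a, sig_b)) >= threshold

rate = 8000
t = np.arange(rate) / rate
tone = lambda f: np.sin(2 * np.pi * f * t)
# Two noisy recordings of the same "finger" (identical resonances)
# and one from a different "finger" (shifted resonances).
rec_a1 = tone(440) + 0.5 * tone(900) + 0.01 * np.random.randn(rate)
rec_a2 = tone(440) + 0.5 * tone(900) + 0.01 * np.random.randn(rate)
rec_b = tone(300) + 0.5 * tone(1200) + 0.01 * np.random.randn(rate)
```

Because the resonant peaks dominate the spectrum, repeated recordings of the same simulated finger match while a finger with shifted resonances does not—mirroring how bone structure and joint stiffness shape the real measurements.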
Notably, the researchers were concerned that the accuracy of this approach could diminish with time, since the human body constantly changes its cells, matrices, and fluid content. To account for this, they acquired the acoustic data of participants at three separate intervals, each 30 days apart.
“We were very surprised that people’s bioacoustics spectral pattern maintained well over time, despite the concern that the pattern would change greatly,” says Sim. “These results suggest that the bioacoustics signature reflects more anatomical features than changes in water, body temperature, or biomolecule concentration in blood that change from day to day.”
While measuring changes in acoustic vibrations is fairly accurate, it does not yet match the accuracy of fingerprints or iris scans. In the future, Sim’s team plans to add additional sensors to the system to boost its accuracy. In the meantime, they are exploring ways to commercialize the technology in phones and wearable devices.
What’s more, development of this technology has come with an unexpected twist. The technique is so accurate at analyzing tissues, the team is now exploring ways to use it for diagnosing musculoskeletal disease.
A line of onlookers stands before a video camera in Stevens Institute of Technology’s S.C. Williams Library, hopeful. Their task: crack a lock that could open, say, a safety deposit box or a bank account or a social security record. Each visitor stares intently into the camera lens. Nothing.
Then physics professor Yuping Huang strides up to the mark.
Click. The lock opens instantly.
“Fortunately,” smiles Huang, “I am one of only three people whose face is allowed by this system to unlock this lock.”
Facial locks are nothing new. Your phone probably has one, or soon will.
But what is remarkable is how this lock works: it relies on simultaneously created twin particles of energy that are somehow linked to one another across distances.
“These quantum properties are going to change the internet,” predicts Huang, who directs the university’s Center for Quantum Science & Engineering and works with graduate students including Lac Nguyen and Jeeva Ramanathan on the quantum lock project. “One big way it will do that is in the enabling of security applications like this one, except on much larger scales.
“If it turns out this technology can be deployed in our homes and offices, as we believe it can be, eavesdroppers will be nearly powerless to sneak into the ever-more connected networks of devices that help run our lives but also hold much of our personal data.”
Welcome to the weird world of quantum communications.
But how does the system work?
When people stand in front of the camera attached to the lock, the Stevens setup captures information about each person’s face and sends it over the internet to a server housed in a different part of the university. There, facial-recognition computations and matches are done using open-source software. (However, Huang’s team is also working on bringing quantum physics to that step, too; stay tuned.)
While this may seem like a pretty standard computation so far, a key distinction occurs in its networking security: the data exchanged between the two parties is secured by fundamental laws of physics.
As facial photos are taken by the video camera, lasers in Huang’s physics lab create twin photons — tiny, power-packed particles of energy — by splitting beams of light with special crystals.
The twin photons are then separated. One photon is kept in the lab while the other is sent through fiber-optic lines back to the library. Complex, secret “keys” are instantly generated as the photons are detected at each site; this process will ensure that the secure information meets up with a trusted partner at the other end of the transaction.
The keys serve as what’s known in cryptography as a “one-time pad”: a single-use, uncrackable code shared between the parties that encrypts the images and communications, preventing any hacker from reading them.
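In code, a one-time pad is simply a byte-wise XOR with a truly random, never-reused key at least as long as the message. In the sketch below, `secrets.token_bytes` stands in for the photon-derived quantum key:

```python
import secrets

def otp(message, pad):
    """XOR each message byte with a pad byte. With a truly random,
    single-use pad as long as the message, the ciphertext is
    information-theoretically secure; XOR is its own inverse, so
    the same call decrypts."""
    assert len(pad) >= len(message), "pad must cover the whole message"
    return bytes(m ^ k for m, k in zip(message, pad))

msg = b"face-match: open lock"
pad = secrets.token_bytes(len(msg))  # stand-in for the quantum key
ciphertext = otp(msg, pad)
recovered = otp(ciphertext, pad)
```

The hard part has never been the XOR; it is distributing fresh, secret pads to both ends, which is precisely the problem quantum key distribution solves.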
“We don’t know why quantum properties work this way,” explains Huang, shaking his head. “Even Einstein didn’t know. But they work, always, so far as we can measure. And a whole host of computing, financial and security applications will be coming down the road in our lifetime to leverage the power of those properties.”
“This prototype demonstrates the drop-in compatibility of our quantum key-distribution system for secure networked devices,” agrees Nguyen. “Wider adoption of this technology could protect the communications of corporations, governments and intelligence services.
“Some of the potential applications we already see include corporate and government data centers; military base communications; voting processes; and smart-city monitors, to name just a few.”
The system could also bring secure privacy to the individual, adds the Stevens team, including for such applications as controlling systems in one’s home remotely, communicating with a corporate office from home, or securing a home wireless network.
Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/telecom/security/making-the-ultimate-software-sandbox
Is it possible to process highly sensitive data (say, a person’s genome or their tax returns) on an untrusted, remote system without ever exposing that data to that system? Some of the biggest names in tech are now working together to figure out a way to do just that.
Last month, the Linux Foundation announced the Confidential Computing Consortium (CCC)—a cross-industry effort to develop and standardize the safeguarding of data even when it’s in use by a potentially untrusted system. Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom, and Tencent are all CCC founding members.
Encrypting data at rest and in transit, the Foundation said in a press statement, are familiar challenges for cloud computing providers and users today. But the new challenge the Consortium is taking up concerns allowing sensitive, encrypted data to be processed by a system that otherwise would have no access to that data.
They may want to take a look at an open-source project first launched in 2016 called Ryoan.
“In our model, not only is the code untrusted, the platform is also untrusted,” says Emmett Witchel, professor of computer science at the University of Texas at Austin.
Witchel and co-authors developed Ryoan as a piece of code that would use security features in Intel CPUs to effectively sandbox encrypted data—and allow computation on that data without requiring that either the software or the hardware (other than the CPU) be secure or trusted.
Witchel says a key inspiration for his group was a 1973 paper by Butler Lampson of Xerox’s Palo Alto Research Center. “He talked about how difficult it is to confine untrusted code,” Witchel says. “That means if you have code that wasn’t written by you, you don’t know the motivations of the people who wrote it, and you want to make sure that code isn’t stealing any secrets [from your data], it’s very, very difficult.”
Witchel says one of his graduate students at the time argued that confining data completely within a sandbox—in the face of potentially adversarial code and hardware—was practically impossible.
Witchel adds that he agreed with his student, up to a point. “But it’s not 1973. And people are doing different things with computers now,” he says. “They’re recognizing images. They’re processing genome data. These are very specific tasks that have properties that we can take advantage of.”
According to Witchel, Ryoan uses what’s called a “one-shot data model.” That means the program looks at a user’s sensitive data only once in passing—a scheme that might be applicable to rapid-fire video or image streams that run image recognition software.
It was this transitory, one-time nature of the data processing that simplified Lampson’s Confinement Problem and made it solvable, Witchel says.
On the other hand, you couldn’t use Ryoan if you were processing a sensitive image on a remote, untrusted system that was storing and processing—and then storing and further processing—one’s data. Such a process may be necessary if, say, you were running Photoshop remotely on a sensitive image.
Ryoan relies on an Intel hardware security feature called Software Guard Extensions (SGX), which allowed its creators to begin addressing the problem. However, SGX is only a first step, says Tyler Hunt, a University of Texas at Austin computer science graduate student and co-developer of Ryoan.
“SGX only allows you to have 128 megabytes of physical memory, so there’s some challenge in finding applications that are small enough,” Hunt says. “And genomes are very large, so if you actually wanted to process an entire genome, you’d need some larger hardware to do it.”
Witchel and Hunt say there are other efforts now afoot to make “enclaves” like Ryoan (in other words—isolated, trusted computing environments) in larger and more diverse systems than just a CPU.
Microsoft researchers last year proposed a software environment called Graviton, which would run trusted computing enclaves on GPUs. RISC-V architecture hardware systems may soon be adopting a trusted computing enclave standard called Keystone. And ARM—a CCC member—offers its own trusted computing environment called TrustZone.
Witchel says trusted enclaves like Ryoan and its descendants could be important as users become more savvy about the permissions they give for use of their personal data.
People today could be better educated, Witchel says, so they can understand and assert their rights. They need to know “that their data is out there and is being used and monetized. And that they should have some control and some assurance that their data is only being used for the purposes that they released it for. I want my genome data to tell me about my ancestry. I don’t want you to keep that genome data to use it to develop a drug that you then sell to me at an inflated price. That seems unfair.”
Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/security/how-language-shapes-chinese-and-english-password-security
No matter the differences in language and culture, both Chinese- and English-language Internet users apparently find common ground in using easily guessable password variants of “123456.” But a recent study comparing password patterns among the two languages also found notable and unique features in Chinese passwords that have big implications for Internet security beyond China.
NIST has enlisted researchers from academia and private industry to get quantum-resistant cryptography ready for 2022
Kaspersky says the group used an HTML-based exploit that’s almost impossible to detect
Following a trail of suspicious digital crumbs left in cloud-based systems across South Asia, Kaspersky Lab’s security researchers have uncovered a steganography-based attack carried out by a cyberespionage group called Platinum. The attack targeted government, military, and diplomatic entities in the region.
Platinum was active years ago and had since been believed to be disarmed. Kaspersky’s cyber-sleuths, however, now suspect that Platinum has been operating covertly since 2012, through an “elaborate and thoroughly crafted” campaign that allowed it to go undetected for a long time.
The group’s latest campaign harnessed a classic concealment technique known as steganography. “Steganography is the art of concealing a file of any format or communication in another file in order to deceive unwanted people from discovering the existence of [the hidden] initial file or message,” says Somdip Dey, a U.K.-based computer scientist with a special interest in steganography at the University of Essex and the Samsung R&D Institute.
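A minimal sketch of the idea, using classic least-significant-bit embedding in raw byte data—Platinum's actual scheme was subtler, reportedly encoding commands in HTML attribute ordering and whitespace, and all names here are illustrative:

```python
def hide_message(cover_bytes, secret):
    """Embed `secret` in the least-significant bits of `cover_bytes`
    (e.g. raw pixel data). The cover file still looks ordinary;
    only someone who reads the low bits recovers the payload."""
    bits = "".join(f"{byte:08b}" for byte in secret)
    assert len(bits) <= len(cover_bytes), "cover too small for payload"
    stego = bytearray(cover_bytes)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | int(bit)  # overwrite low bit
    return bytes(stego)

def extract_message(stego_bytes, length):
    """Read back `length` bytes hidden by hide_message()."""
    bits = "".join(str(b & 1) for b in stego_bytes[: length * 8])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

cover = bytes(range(256)) * 4  # stand-in for image data
stego = hide_message(cover, b"attack at dawn")
```

Because each cover byte changes by at most one, the stego file is statistically close to the original, which is what lets such payloads slip past signature- and content-based scanners.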
Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/telecom/security/digital-doppelgngers-fool-advanced-antifraud-tech
With traces of a user’s browsing history and online behavior, hackers can build a fake virtual “twin” and use it to log in to a victim’s accounts
As new security technologies shield us from cybercrime, a slew of adversarial technologies match them, step for step. The latest such advance is the rise of digital doppelgängers—virtual entities that mimic real user behaviors authentic enough to fool advanced anti-fraud algorithms.
In February, Kaspersky Lab’s fraud-detection teams busted a darknet marketplace called Genesis that was selling digital identities starting from US $5 and going up to US $200. The price depended on the value of the purchased profile—for example, a digital mask that included a full user profile with bank login information would cost more than just a browser fingerprint.
The masks purchased at Genesis could be used through a browser and proxy connection to mimic a real user’s activity. Coupled with stolen (legitimate) user accounts, the attacker was then free to make new, trusted transactions in the victim’s name—including with credit cards.
Kaspersky security researchers described how hackers used software updates to push malware onto victims’ computers
When security researchers at Kaspersky Lab disclosed Operation ShadowHammer in March, they described how attackers tampered with software updates from PC-maker ASUSTeK Computer to install malware on victims’ computers. Now, new details revealed last week indicate the operation was even more insidious—it sabotaged developer tools, an approach that could spread malware much faster and more discreetly than conventional methods.
In ShadowHammer, a sophisticated group of attackers modified an old version of the ASUS Live Update Utility software and pushed out the tampered copy to ASUS computers around the world, said Kaspersky Lab. The Live Update Utility, which comes preinstalled in most new ASUS computers, automatically updates the set of firmware instructions that control the computer’s input and output operations, hardware drivers, and applications. The modified tool, signed with legitimate ASUSTeK certificates and stored on official servers, looked like the real thing. But once it was planted, it gave the attackers the ability to control the computer through a remote server and install additional malware.
ShadowHammer is an “example of how sophisticated and dangerous a smart supply chain attack can be,” said Vitaly Kamluk, Kaspersky’s director of the global research and analysis team.
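Because the tampered updater carried valid ASUSTeK signatures, certificate checks alone could not catch it. Comparing a downloaded binary against a hash published through a separate channel adds a second hurdle; the sketch below is a hypothetical helper, not part of any ASUS tooling:

```python
import hashlib
import hmac

def verify_update(payload: bytes, published_sha256: str) -> bool:
    """Check a downloaded update against a hash obtained out of band,
    so an attacker must compromise both the update server and the
    hash listing to slip a modified binary through."""
    digest = hashlib.sha256(payload).hexdigest()
    # Constant-time comparison avoids leaking match position.
    return hmac.compare_digest(digest, published_sha256)

update = b"\x90" * 64  # stand-in for an update binary
good = hashlib.sha256(update).hexdigest()
```

Even a single flipped byte changes the digest entirely, so a backdoored build fails the check no matter how legitimate its code-signing certificate looks.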