Tag Archives: Computing/Software

Interactive e-Learning Platform Boosts Performance of New Musicians

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/interactive-elearning-platform-boosts-performance-of-new-musicians

While many budding musicians find joy in playing their instruments, not all are as enthusiastic to learn about music theory or the nuances of sound. To make lessons more engaging for music students, a group of researchers in Slovenia have created a new e-learning platform called Troubadour.

The platform can be adapted to different music curriculums and includes gaming features to support student engagement. A controlled study, published 15 May in IEEE Access, shows that Troubadour is effective at boosting the exam performance of first-year music students.

Matevž Pesek, a researcher at University of Ljubljana who helped build the platform, began playing the accordion at age eight, and has since taken up the keyboard, guitar, and Hammond organ. When he first started playing music—like many children and older beginners—he struggled with some aspects of the learning curve.

“I never completely liked the fact I needed to practice to become more proficient. Moreover, I perceived music theory as a separate problem, completely unconnected to the instrument practice,” he says. “It was only later in my adulthood when I somehow became aware of the importance of the music theory and its connection to the instrument playing.”

Pesek saw an opportunity to create Troubadour. While several online music learning platforms exist, he points out that these are not adaptable to school curriculums and many are only available in English.

“The lack of flexibility—where teachers cannot adjust the exercises according to their curriculum—and the language barrier motivated us to develop a solution for the Slovenian students,” says Pesek. “We have also made the platform’s source code publicly available for other interested individuals and communities; they can expand the platform’s applications, translate the platform to their native language, and also help us further develop the platform.”

With Troubadour, teachers select which features they want incorporated into a music exercise, and an algorithm automatically generates sound sequences to support it. Students then access the Web-based platform to complete interval dictation exercises: the platform plays melodic sequences, and students identify the intervals they hear and record their answers. To make the exercises more engaging, the researchers added gaming features such as badges and a scoreboard that lets students see where they rank against their peers.
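
The study doesn’t spell out the generation algorithm, but the idea of automatically generating an interval-dictation exercise can be sketched in Python. Everything here is illustrative: the `allowed_intervals` parameter stands in for the teacher’s curriculum settings, and notes are plain MIDI numbers.

```python
import random

# Interval names keyed by semitone distance (standard Western music theory).
INTERVALS = {1: "minor 2nd", 2: "major 2nd", 3: "minor 3rd", 4: "major 3rd",
             5: "perfect 4th", 6: "tritone", 7: "perfect 5th", 8: "minor 6th",
             9: "major 6th", 10: "minor 7th", 11: "major 7th", 12: "octave"}

def generate_exercise(allowed_intervals, length=5, low=60, high=72, seed=None):
    """Build a melodic sequence (MIDI note numbers) plus the interval
    labels a student would be asked to identify after hearing it."""
    rng = random.Random(seed)
    notes = [rng.randint(low, high)]
    labels = []
    while len(notes) < length:
        step = rng.choice(allowed_intervals)
        # Try the interval both upward and downward, staying in range.
        candidates = [n for n in (notes[-1] + step, notes[-1] - step)
                      if low <= n <= high]
        if not candidates:   # interval doesn't fit from this note; redraw
            continue
        notes.append(rng.choice(candidates))
        labels.append(INTERVALS[step])
    return notes, labels
```

A teacher restricting an exercise to seconds and perfect fifths would call `generate_exercise([1, 2, 7])`; a student’s answer is then checked against `labels`.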

In their study, Pesek and his colleagues evaluated the effectiveness of Troubadour as a study tool for students enrolled in a music theory course at the Conservatory of Music and Ballet Ljubljana. The data they captured included platform use and exam scores, as well as student and teacher feedback through surveys.

The results showed that, while there was a minimal benefit for second-year music students, first-year students who used Troubadour achieved an average exam score that was 9.2 percent better than those who didn’t. The teachers attributed this performance increase to better student engagement and the fact that the level of music experience and proficiency among first-year students varies.

The researchers have since expanded Troubadour to include rhythmic dictation exercises, and are now working on harmonic exercises. “We also plan on including several different tools to aid the in-platform communication between teachers and students, and plan to support online exams within the platform,” says Pesek.

Software Development Environments Move to the Cloud

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/software-development-environments-cloud

If you’re a newly hired software engineer, setting up your development environment can be tedious. If you’re lucky, your company will have a documented, step-by-step process to follow. But this still doesn’t guarantee you’ll be up and running in no time. When you’re tasked with updating your environment, you’ll go through the same time-consuming process. With different platforms, tools, versions, and dependencies to grapple with, you’ll likely encounter bumps along the way.

Austin-based startup Coder aims to ease this process by bringing development environments to the cloud. “We grew up in a time where [Microsoft] Word documents changed to Google Docs. We were curious why this wasn’t happening for software engineers,” says John A. Entwistle, who founded Coder along with Ammar Bandukwala and Kyle Carberry in 2017. “We thought that if you could move the development environment to the cloud, there would be all sorts of cool workflow benefits.”

Tech Volunteers Help Overloaded U.S. Government Agencies

Post Syndicated from Michelle V. Rafter original https://spectrum.ieee.org/tech-talk/computing/software/tech-volunteers-help-overloaded-government-agencies

When U.S. Digital Response launched on 16 March, it was four colleagues who wanted to pool their collective experience running public-sector technology programs to help government agencies that were buckling under COVID-19.

Since then, the all-volunteer group has scaled exponentially, placing more than 150 people with a range of digital skills into more than 150 short-term or ongoing assignments at 25 agencies at all levels of government, including with state labor departments struggling to keep up with new claims for unemployment insurance benefits.

As of early May, U.S. Digital Response had amassed a database of more than 4,850 other prospective volunteers who filled out the online application on the group’s website to donate their time. The group continues to accept applications for volunteers with digital, policy, and communications skills, and to encourage public agencies to fill out an online form if they need help.  

Coding for COVID-19: Contest Calls on Developers to Help Fight the Pandemic

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/coding-covid19-ibm-contest-calls-developers-pandemic

Now in its third year, IBM’s Call for Code challenge is a global initiative encouraging developers to create solutions for the world’s most pressing issues. Previous competitions called for apps to mitigate the effects of natural disasters and technologies that can assist people after catastrophes. This year’s challenge—a partnership with the United Nations’ Human Rights Office and the David Clark Cause—offers two tracks to tackle climate change and the COVID-19 pandemic.

Australia’s Contact-Tracing COVIDSafe App Off to a Fast Start

Post Syndicated from John Boyd original https://spectrum.ieee.org/tech-talk/computing/software/australias-contact-tracing-covidsafe-app

The Australian government launched its home-grown COVIDSafe contact-tracing app for the new coronavirus on 26 April. And despite the government’s history of technology failures and misuse of personal data, smartphone users have been eager to download the opt-in software on Apple’s App Store and on Google Play. But if the government is to achieve its target of 10 million downloads, there’s still a ways to go.

What Are Deepfakes and How Are They Created?

Post Syndicated from Sally Adee original https://spectrum.ieee.org/tech-talk/computing/software/what-are-deepfakes-how-are-they-created

A growing unease has settled around evolving deepfake technologies that make it possible to create evidence of scenes that never happened. Celebrities have found themselves the unwitting stars of pornography and politicians have turned up in videos appearing to speak words they never really said.

Concerns about deepfakes have led to a proliferation of countermeasures. New laws aim to stop people from making and distributing them. Earlier this year, social media platforms including Facebook and Twitter banned deepfakes from their networks. And computer vision and graphics conferences teem with presentations describing methods to defend against them.

So what exactly is a deepfake, and why are people so worried about them?

New Software Streams Apps to Save Space on Your Phone

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/new-software-streams-apps-to-save-space-on-your-phone

The way we consume multimedia content has changed over time, and so has the technology that delivers it. What began as downloading evolved into streaming, with remote copies of audio or video files transmitted over the Internet in real time—eliminating the need for a local copy stored on a device.

But what if streaming could apply to mobile apps as well? That’s the idea behind AppStreamer [PDF], a new program designed to reduce app storage requirements through predictive streaming. The software was devised by a team of researchers from Prince of Songkla University in Thailand, Purdue University, University of Illinois at Urbana-Champaign, AT&T Labs Research, and Georgia Institute of Technology, with support from AT&T and the National Science Foundation.
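
AppStreamer’s actual models aren’t reproduced here, but the core idea, predicting which app resources will be needed next and fetching them ahead of time, can be sketched with a simple first-order Markov predictor. The predictor below is an illustrative assumption, not the paper’s method.

```python
from collections import defaultdict, Counter

class PrefetchPredictor:
    """Learns which resource tends to follow which, so likely next
    resources can be streamed before the app actually requests them."""
    def __init__(self):
        self.transitions = defaultdict(Counter)  # resource -> next-resource counts
        self.last = None

    def record(self, resource):
        """Observe one resource access."""
        if self.last is not None:
            self.transitions[self.last][resource] += 1
        self.last = resource

    def predict(self, resource, k=2):
        """Return up to k resources most likely to be needed after `resource`."""
        return [r for r, _ in self.transitions[resource].most_common(k)]
```

In a real system the predictions would drive background downloads, while storage is reclaimed from resources unlikely to be touched again.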

Mainframes Are Having a Moment

Post Syndicated from Michelle V. Rafter original https://spectrum.ieee.org/tech-talk/computing/software/mainframes-programming-language-cobol-news-coronavirus

If there’s a silver lining to state unemployment insurance systems’ failings caused by the COVID-19 crisis, it’s the attention being paid to the need for people who can program mainframes and work on enterprise-level technology.

Institutions of higher education and companies that sell and rely on mainframe tech are using the situation to trumpet the number of well-paid mainframe programmer and system administrator jobs and the need to train people for them.

Although many college and university computer science departments have cut back or dropped mainframe programming curriculum to focus on more modern languages and technologies, faculty and staff at others report an uptick in interest in Cobol and related classes. The increase began well before pandemic-related layoffs inundated state unemployment agency computer systems, causing government officials to put out the call for programmers who know Cobol to step in and help.

Cobol Programmers Answer Call to Shore Up Unemployment Benefits Systems

Post Syndicated from Michelle V. Rafter original https://spectrum.ieee.org/tech-talk/computing/software/cobol-programmers-answer-call-unemployment-benefits-systems

Cobol programmers in the United States are heeding the call to work on antiquated state unemployment benefits computer systems that are straining under the unprecedented increase in claims filed because of COVID-19.

Applications for jobless benefits have soared in recent weeks. People who were laid off after employers curtailed operations or shut down completely because of the new coronavirus filed 6.6 million new claims for benefits in the week ending 4 April. The new claims brought the three-week total to more than 16 million, the equivalent of a tenth of the U.S. workforce.

The spike in new claims has inundated benefits computer systems in states such as Connecticut, Florida, and elsewhere, some of which haven’t updated their Cobol-based mainframe systems in years, or decades.

New Jersey Gov. Phil Murphy drove that point home during a 4 April press conference when he mistakenly referred to needing programmers with “Cobalt” skills to work on the state’s 40-year-old unemployment benefits system. “There’ll be lots of post-mortems and one of them on our list will be how the heck did we get here when we literally needed Cobol programmers,” Murphy said in the press conference.

New Jersey isn’t alone. Florida’s unemployment claims system has been so overwhelmed, the state is reverting to using paper applications. Massachusetts deployed more than 500 new employees to work remotely to meet increased demand that has overloaded its unemployment system.

Connecticut’s Department of Labor shelved work on an updated jobless benefits system in order to manage the influx of new requests caused by the economic downturn related to the virus. In the past three weeks, the department processed more new applications than it would normally handle in 18 months and currently has a six-week backlog, according to state officials.

Connecticut’s labor department is bringing back retirees and using IT staff from other departments to upgrade its 40-year-old system, which runs on a Cobol mainframe and connected components. The system is not fully automated, and requires manual actions at multiple points in the process, according to Nancy Steffens, the department’s head of communications. “I don’t have any info to provide to you other than some of the retirees returning to work are programmers knowledgeable in Cobol,” Steffens said.

An Oldie but a Goodie

Co-developed by pioneering computer programmer Grace Hopper in 1959, Cobol remains widely used in government and by financial institutions in part because it’s able to handle large processing volumes but also because of what it would cost in time and money to replace. In addition to state governments, multiple federal agencies still use it, according to a 2016 report from the U.S. Government Accountability Office. Cobol powers 95 percent of ATM swipes, 80 percent of in-person transactions and 43 percent of banking systems, according to Reuters.

Despite being so ubiquitous, there aren’t a lot of programmers who work in it. In Spectrum’s 2019 list of top programming languages, Cobol ranked 44th.

The current crisis could change that. Since January, the share of Indeed job postings per million that mention “Cobol” has increased by 6.47 percent, according to a spokesperson for the popular job board.

As states struggle, seasoned programmers are lining up to help. In recent weeks, Cobol Cowboys has been inundated with inquiries from veteran programmers interested in putting their Cobol skills to work. The Gainesville, Texas, firm operates as a job placement agency to match programmers who work as independent contractors with public and private sector projects that fit their skills.

In the past three years, the company’s database of programmers who know Cobol and other, more modern languages has grown from 50 to close to 350. Their average age is between 45 and 60. “We have an older gentleman, a man who did some work with Grace Hopper, who I’d say is in his mid-80s,” said Eileen Hinshaw, the company’s chief operating officer.

Cobol Cowboys contacted the state of New Jersey after seeing Gov. Murphy’s press conference, and is currently “in communications with the state,” Hinshaw said.

Other Programmers Ready to Help

Long-time programmers aren’t the only ones eager to help. Hasnain Attarwala also contacted New Jersey after seeing the governor’s press conference. Attarwala, 30, is a month shy of earning a bachelor’s degree in computer science from Northern Illinois University (NIU), where he’s studied mainframe computers.

Attarwala, who is the student chair of NIU’s Association of Computing Machinery chapter, has a job lined up after graduation, but wants to donate his time now. In the past week, he collected names of other NIU students who want to help and talked to a volunteer coordinator for U.S. Digital Response, which is helping New Jersey line up volunteers with digital and other skills. As of 9 April, Attarwala was waiting to hear if his services were needed.

U.S. Digital Response was formed last month by a group of public-interest technologists, including some who worked in technology roles in the Obama administration. The group is acting as a clearinghouse for federal and state agencies that need assistance and volunteers with digital skills who want to provide it. In addition to screening volunteers, the group designed New Jersey’s tech talent volunteer application form, among other projects.

More than 3,500 people have added their names to U.S. Digital Response’s volunteer pool, although the number of people who’ve been placed so far is well below that, according to Cori Zarek, one of the group’s co-founders and a deputy U.S. Chief Technology Officer from 2016 to 2017.

The group doesn’t collect demographic data so it’s impossible to know people’s ages or employment status, Zarek said. Still, “We’ve seen lots of seasoned veterans of these mainframe systems raising their hands. It’s incredible. It’s not just in New Jersey. We are absolutely eager to understand who can bring those skills to be ready to solve some of these problems,” said Zarek, who also runs the Digital Service Collaborative at Georgetown University’s Beeck Center for Social Impact.

States that upgraded unemployment claims systems in recent years may be better able to weather the onslaught of claims. Several formed consortiums to create a core unemployment insurance system that can be tailored to meet each one’s requirements. One consortium is ReEmployUSA, which started in Mississippi, and was subsequently adopted by Maine and Rhode Island, though not without problems. Connecticut was in the process of switching to ReEmployUSA before COVID-19 unraveled those plans.

Q&A: Sourcegraph’s Universal Code Search Tool

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/sourcegraph-universal-code-search-tool

In software development, code search is a way to better navigate and understand code. But it’s an often overlooked technique, with development tools and coding environments offering clunky and limited search functionalities.

Tech startup Sourcegraph aims to change that with its universal code search tool by the same name that makes searching code as seamless as doing a Google search on the web. To achieve that efficiency, Sourcegraph models code and its dependencies as a graph, and performs queries on the graph in real time.
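
Sourcegraph’s internals aren’t shown in the interview, but the code-as-graph idea is easy to sketch: symbols become nodes, and definition and reference sites become edges that queries traverse. The shapes below are illustrative assumptions, not Sourcegraph’s actual schema.

```python
from collections import defaultdict

class CodeGraph:
    """Toy code graph: symbols map to where they are defined and referenced."""
    def __init__(self):
        self.defs = {}                 # symbol -> (file, line)
        self.refs = defaultdict(list)  # symbol -> [(file, line), ...]

    def add_definition(self, symbol, file, line):
        self.defs[symbol] = (file, line)

    def add_reference(self, symbol, file, line):
        self.refs[symbol].append((file, line))

    def go_to_definition(self, symbol):
        """The classic 'jump to definition' query."""
        return self.defs.get(symbol)

    def find_references(self, symbol):
        """The inverse query: every place a symbol is used."""
        return self.refs[symbol]
```

Indexing a repository populates the graph once; after that, both queries are constant-time lookups rather than repository-wide text scans.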

Show The World You Can Write A Cool Program Inside A Single Tweet

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/computing/software/show-the-world-you-can-write-a-cool-program-inside-a-single-tweet

Want to give your coding chops a public workout? Then prove what you can do with the BBC Micro Bot. Billed as the world’s first “8-bit cloud,” and launched on 11 February, the BBC Micro Bot is a Twitter account that waits for people to tweet at it. Then the bot takes the tweet, runs it through an emulator of the classic 1980s BBC Microcomputer running Basic, and tweets back an animated gif of three seconds of the emulator’s output. It might sound like that couldn’t amount to much, but folks have been using it to demonstrate some amazing feats of programming, most notably including Eben Upton, creator of the Raspberry Pi.

“The Bot’s [output] posts received over 10 million impressions in the first few weeks, and it’s running around 1000 Basic programs per week,” said the account’s creator, Dominic Pajak, in an email interview with IEEE Spectrum.

Upton, for example, performed a coding tour de force with an implementation of Conway’s Game of Life, complete with a so-called Gosper Gun, all running fast enough to see the Gun spit out glider patterns in real time. (For those not familiar with Conway’s Game of Life, it’s a set of simple rules for cellular automata that exist on a flat grid. Cells are turned on and off based on the state of neighboring cells according to those rules. Particularly interesting patterns that emerge from the rules have been given all sorts of names.)
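
The rules themselves fit in a few lines. A minimal sketch in Python, representing the board as a set of live-cell coordinates (a common trick for unbounded grids):

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours every candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives next turn if it has exactly 3 neighbours,
    # or 2 neighbours and it was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

Upton’s feat, of course, was doing the equivalent in hand-written 6502 machine code fast enough to animate a Gosper Gun in real time.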

Upton did this by writing 150 bytes of data and machine code for the BBC Microcomputer’s original CPU, the 6502, which the emulator behind the BBC Micro Bot is comprehensive enough to handle. He then converted this binary data into tweetable text using Base64 encoding, and wrapped that data with a small Basic program that decoded it and launched the machine code. Since then, people have been using even more elaborate encoding schemes to pack even more in. 
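
The packing trick is worth a closer look: Base64 turns every 3 bytes of binary into 4 ASCII characters, so roughly 150 bytes of machine code becomes 200 tweetable characters, leaving room for a small decoder stub. A sketch (the payload here is a stand-in, not Upton’s actual code):

```python
import base64

# Stand-in for ~150 bytes of 6502 data and machine code.
payload = bytes(150)

# Encode the binary as tweetable ASCII text.
encoded = base64.b64encode(payload).decode("ascii")
print(len(encoded))    # 4 * ceil(150 / 3) = 200 characters

# The receiver (Upton's small Basic wrapper) reverses the
# encoding before executing the bytes.
assert base64.b64decode(encoded) == payload
```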

Pajak, who is the vice-president of business development for Arduino, created the BBC Micro Bot because he is a self-described fan of computer history and Twitter. “It struck me that I could combine the two.” He chose the BBC Micro as his target because “growing up in the United Kingdom during the 1980s, I learnt to program the BBC Micro at school. I certainly owe my career to that early start.” Pajak adds that, “There are technical reasons too. BBC Basic was largely developed by Sophie Wilson (who went on to create the Arm architecture) and was by far the best Basic implementation of the era, with some very nice features allowing for ‘minified’ code.”

Pajak explains that the bot is written in JavaScript for the Node.js runtime environment, and acts as a front end to Matt Godbolt’s JSbeeb emulation of the BBC Micro. When the bot spots a tweet intended for it, “it does a little bit of filtering and then injects the text into the emulated BBC Micro’s keyboard buffer. The bot uses ffmpeg to create a 3-second video after 30 seconds of emulation time.” Originally the bot was running on a Raspberry Pi 4, but he’s since moved it to Amazon Web Services.

Pajak has been pleasantly surprised by the response: “The fact that BBC BASIC is being used for the first time by people across the world via Twitter is definitely a surprise, and it’s great to see people discovering and having fun with it. There were quite a lot of slogans and memes being generated by users in Latin America recently. This I didn’t foresee for sure.”

The level of sophistication of the programs has risen sharply, from simple Basic programs through Upton’s Game of Life implementation and beyond. “The barriers keep getting pushed. Every now and then I have to do a double take: Can this really be done with 280 characters of code?” Pajak points to Katie Anderson’s tongue-in-cheek encoding of the Windows 3.1 logo, and the replication of a classic bouncing ball demo by Paul Malin—game giant Activision’s technical director—which, Pajak says, uses “a special encoding to squeeze 361 ASCII characters of code into a 280 Unicode character tweet.”

If you’re interested in trying to write a program for the Bot, there are a host of books and other coding advice about the BBC Micro available online, with Pajak hosting a starting set of pointers and a text encoder on his website, www.dompajak.com.

As for the future and other computers, Pajak says he has given some tips to people who want to build similar bots for the Apple II and Commodore computers. For himself, he’s contemplating finding a way to execute the tweets on a physical BBC Micro, saying “I already have my BBC Micro connected to the Internet using an Arduino MKR1010…”

China Launches National Blockchain Network in 100 Cities

Post Syndicated from Nick Stockton original https://spectrum.ieee.org/computing/software/china-launches-national-blockchain-network-100-cities

Next month, an alliance of Chinese government groups, banks, and technology companies will publicly launch the Blockchain-based Service Network (BSN). It will be among the first blockchain networks to be built and maintained by a central government. Think of it like an operating system, where participants can use existing blockchain programs, or build their own bespoke tools, without having to design a framework from the ground up.

The BSN’s proponents say it will reduce the costs of doing blockchain-based business by 80 percent. By the end of 2020, they hope to have nodes in 200 Chinese cities. Eventually, they believe it could become a global standard.

China leads the world in blockchain-related patents, according to the World Intellectual Property Organization. And blockchain goes far beyond Bitcoin; the technology can be used to verify all sorts of transactions.

For instance, JD.com—one of China’s largest online stores—uses blockchain tech to verify its supply chain to customers and business partners who had worried the retailer was selling knock-off versions of luxury brands. The company recently made its platform open source. China’s General Administration of Customs uses blockchain to monitor 26 international border crossings.

And, though China has effectively banned cryptocurrencies like Bitcoin, digital payments are wildly popular. “Most people prefer to use WeChat or Alipay,” says Hong Wan, a blockchain expert from North Carolina State University. She says the government may want BSN to become central to a digital currency and payment system that would rival those services.

The biggest roadblock for blockchain technology has been that setting up a platform is expensive and difficult, says Yang Xiang, the dean of the Digital Research Innovation Capability Platform at Swinburne University in Australia. “When we look back at the development of blockchain technology, the emergence of BSN or similar solutions is inevitable,” he says.

According to a white paper [PDF] published by the BSN’s founding members—which include the Chinese National Information Center, China UnionPay, China Mobile, and payroll services company Red Date—most companies can expect to spend at least US $14,000 to build, operate, and maintain a blockchain platform for one year.

The BSN will let programmers develop blockchain applications without requiring them to do so much heavy lifting. The white paper estimates it will cost businesses, on average, less than $300 to deploy an application on BSN.

Unlike Bitcoin and other so-called permissionless blockchains, where anybody can join and review the entire transaction record, applications running on the BSN will have closed membership by default. This ‘permissioned’ setup is much more amenable to businesses, which typically want to share transaction data only with trusted partners. Permissioned networks are also easier to scale, because all verifications happen in-house.
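
The difference can be sketched with a toy permissioned ledger: the chain itself is ordinary hash-linking, but every write is gated by a membership list. This is an illustrative sketch, not the BSN’s design.

```python
import hashlib
import json

class PermissionedLedger:
    """Toy permissioned blockchain: only registered members may append."""
    def __init__(self, members):
        self.members = set(members)
        self.chain = [{"prev": "0" * 64, "data": "genesis"}]

    def _hash(self, block):
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(self, member, data):
        if member not in self.members:   # the permission check
            raise PermissionError(f"{member} is not a member")
        self.chain.append({"prev": self._hash(self.chain[-1]), "data": data})

    def verify(self):
        """Check that every block still links to its predecessor."""
        return all(self.chain[i]["prev"] == self._hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))
```

In a permissionless chain, `append` would instead be open to anyone and gated by consensus (proof of work or stake) rather than a membership set.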

The BSN’s founders announced the platform on 15 October 2019—about a week before Chinese President Xi Jinping declared blockchain a national tech priority. Since then, individual developers and enterprise-scale engineering groups have been building and beta testing the platform. By launch time, the BSN Development Alliance says it hopes to have 100 city nodes running the platform, each with thousands of users.

The plan isn’t without its critics, though. North Carolina State’s Wan, for one, fears that the platform will experience performance lag due to verifying so many diffuse transactions. She says the BSN hasn’t yet released detailed technical specifications on the platform, so she doesn’t know how its creators will overcome this problem. “I think we all have skepticisms about what is going on in the tech,” says Wan. “It is still in the testing phase.”

The BSN Alliance hopes the platform will someday become the global standard for blockchain operations. But, China’s international partners may hesitate to join due to privacy concerns: The Chinese government will hold the BSN’s root key, which would allow it to monitor all transactions made using the platform.

China isn’t the only country betting on blockchain. In 2016, Australia spearheaded an effort to create global blockchain standards through the International Organization for Standardization. The European Union has been trudging toward a blockchain platform for years. And IBM, Facebook, and other tech companies have already launched their own versions.

Jiangshan Yu, associate director of the Blockchain Technology Centre at Monash University in Australia, isn’t concerned. He takes the long view: “What I see happening with the global blockchain infrastructure is there will be many national, local, or business platforms that will all eventually come together.”

This article appears in the April 2020 print issue as “China Takes Blockchain National.”

Programming Without Code: The Rise of No-Code Software Development

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/programming-without-code-no-code-software-development

Code is the backbone of most software programs and applications. Each line of code serves as an instruction—a logical, step-by-step mechanism for computers, servers, and other machines to perform an action. To create those instructions, one must know how to write code—a valuable skill that’s sometimes in short supply.

But what if you could build software without writing a single line of code? That’s the premise behind no-code development, a software development method that has been gathering momentum. With the help of no-code platforms, it’s possible to develop software without writing any underlying code.

“No-code allows people who don’t know how to write code to develop the same applications that a software engineer would,” says Vlad Magdalin, co-founder and CEO of Webflow, a no-code platform for building websites. “It’s the ability to do without code what has traditionally been done with code.”

Do You Have the Right Complexion for Facial Recognition?

Post Syndicated from Willie D. Jones original https://spectrum.ieee.org/tech-talk/computing/software/do-you-have-the-right-complexion-for-facial-recognition

Back in my days as an undergraduate student, campus police relied on their “judgement” to decide who might pose a threat to the campus community. The fact that they would regularly pass by white students, under the presumption that they belonged there, in order to interrogate one of the few black students on campus was a strong indicator that officers’ judgement—individual and collective—was based on flawed application of a limited data set. Worse, it was an issue that never seemed to respond to the “officer training” that was promised in the wake of such incidents.

Nearly 30 years later, some colleges are looking to avoid accusations of prejudice by letting artificial intelligence exercise its judgement about who belongs on their campuses. But facial recognition systems offer no escape from bias. Why? Like campus police, their results are too often based on flawed application of a limited data set.

Ambitious Data Project Aims to Organize the World’s Geoscientific Records

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/computing/software/ambitious-data-project-aims-to-organize-the-worlds-geoscientific-records

Geoscience researchers are excited by a new big-data effort to connect millions of hard-won scientific records in databases around the world. When complete, the network will be a virtual portal into the ancient history of the planet.

The project is called Deep-time Digital Earth, and one of its leaders, Nanjing-based paleontologist Fan Junxuan, says it unites hundreds of researchers—geochemists, geologists, mineralogists, paleontologists—in an ambitious plan to link potentially hundreds of databases.

The Chinese government has lined up US $75 million for a planned complex near Shanghai that will house dedicated programming teams and academics supporting the project, and a supercomputer for related research. More support will come from other institutions and companies, with Fan estimating total costs to create the network at about $90 million.

Right now, a handful of independent databases with more than a million records each serve the geosciences. But there are hundreds more out there holding data related to Earth’s history. These smaller collections were built with assorted software and documentation formats. They’re kept on local hard drives or institutional servers, some decades old, and converted from one format into another as time, funding, and interest allow. The data might be in different languages and is often guided by informal or variably defined concepts. There is no standard for arranging the hundreds of tables or thousands of fields. This archipelago of information is potentially very useful but hard to access.

Fan saw an opportunity while building a database comprising the Chinese geological literature. Once it was complete, he and his colleagues were able to use parallel computing programs to examine data on 11,000 marine fossil species in 3,000 geological sections. The results dated patterns of paleobiodiversity—the appearance, flowering, and extinction of whole species—at a temporal resolution of 26,000 years. In geologic time, that’s pretty accurate.

The Deep-time project planners want to build a decentralized system that would bring these large and small data sources together. The main technical challenge is not to aggregate petabytes of data on centralized servers but rather to script strings of code. These strings would work through a programming interface to link individual databases so that any user could extract information through that interface.
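The linking idea can be sketched in miniature: each member database keeps its own schema, and a thin adapter layer maps its records into one shared vocabulary that a single query interface can search. Everything below (the database contents, field names, and `query` function) is hypothetical illustration, not the project's actual design.

```python
# Sketch of a federated query layer: each database keeps its native schema,
# and a small adapter normalizes its records into a common vocabulary.

DB_A = [  # e.g., a fossil-occurrence database keyed by genus
    {"genus": "Olenellus", "age_ma": 515, "region": "Laurentia"},
]
DB_B = [  # e.g., a second database with different field names
    {"taxon_name": "Olenellus", "ma": 518, "locality": "Siberia"},
]

def adapt_a(rec):
    return {"taxon": rec["genus"], "age_ma": rec["age_ma"], "place": rec["region"]}

def adapt_b(rec):
    return {"taxon": rec["taxon_name"], "age_ma": rec["ma"], "place": rec["locality"]}

ADAPTERS = [(DB_A, adapt_a), (DB_B, adapt_b)]

def query(taxon):
    """Return all records for a taxon, in the shared schema, across databases."""
    results = []
    for db, adapt in ADAPTERS:
        results.extend(adapt(r) for r in db if adapt(r)["taxon"] == taxon)
    return results

print(query("Olenellus"))
```

The point of the design is that no data moves to a central server; only the adapters and the shared vocabulary have to be agreed upon.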

Harmonizing these data fields requires human beings to talk to one another. Fan and his colleagues hope to kick off those discussions in New Delhi, which in March is hosting a big gathering of geoscientists. A linked network could be a gold mine for researchers scouring geologic data for clues.

In a 19th-century building behind Berlin’s Museum für Naturkunde, micropaleontology curator David Lazarus and paleobiologist postdoc Johan Renaudie run the group’s Neptune database, which is likely to be linked with Deep-time Digital Earth as it develops. Neptune holds a wealth of data on core samples from the world’s ocean floors. Lazarus started the database in the late 1980s, before the current SQL language standard was readily available; at that time it was mostly found only on mainframes. Renaudie explains that Neptune has been modified from its incarnation as a relational database using 4th Dimension for Mac, and has been carefully patched over the years.

There are many such patched-up archives in the field, and some researchers start, develop, and care for data centers that drift into oblivion when funding runs out. “We call them whale fall,” Lazarus says, referring to dead whales that sink to the ocean floor.

Creating a database network could keep this information alive longer and distribute it further. It could lead to new kinds of queries, says Mike Benton, a vertebrate paleontologist in Bristol, England, making it possible to combine independent data sources with iterative algorithms that run through millions or billions of equations. Doing this can deliver more precise time resolutions, which have hitherto been difficult to achieve. “If you want to analyze the dynamics of ancient geography and climate and its influence on life, you need a high-resolution geological timeline,” Fan says. “Right now this analysis is not available.”

This article appears in the March 2020 print issue as “Data Project Aims to Organize Scientific Records.”

Google v. Oracle Explained: The Fight for Interoperable Software

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/google-v-oracle-explained-supreme-court-news-apis-software

Application programming interfaces (APIs) are the building blocks of software interoperability. APIs provide the specifications for different software programs to communicate and interact with each other. For instance, when a travel aggregator website sends a request for flight information to an airline’s API, the API would send flight details back for the website to display.
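The flight-lookup exchange can be sketched with a stand-in for the airline’s side. The endpoint name, parameters, and response fields below are all hypothetical; the point is the contract between caller and provider.

```python
# Illustrative only: a stand-in for an airline's flight-lookup API.
# The function name, parameters, and response fields are all hypothetical.

def airline_api_get_flights(origin, destination, date):
    """Pretend API endpoint: returns flight details matching a route and date."""
    flights = [
        {"flight": "XY100", "origin": "SFO", "destination": "JFK",
         "date": "2020-03-01", "price_usd": 320},
        {"flight": "XY200", "origin": "SFO", "destination": "LAX",
         "date": "2020-03-01", "price_usd": 99},
    ]
    return [f for f in flights
            if f["origin"] == origin and f["destination"] == destination
            and f["date"] == date]

# The aggregator never sees the airline's internal code; it only relies on
# the documented request/response shape. That documented shape is the API.
results = airline_api_get_flights("SFO", "JFK", "2020-03-01")
print(results)
```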

Keeping APIs open, meaning they’re publicly listed and available or shared through a partnership, enables developers to freely build applications that work together. That practice is the basis of how software works today. But a decade-long fight between Google and Oracle over API copyright and fair use could upend the status quo.

What North Korea Really Wants From Its Blockchain Conference

Post Syndicated from Morgen Peck original https://spectrum.ieee.org/tech-talk/computing/software/north-korea-blockchain-conference

A blockchain conference slated to take place next week in Pyongyang, North Korea, now seems unlikely to go forward as law enforcement agencies in the United States and regulators at the United Nations send a clear message that the transfer of cryptocurrency and blockchain expertise to the DPRK will not go unpunished. The uncertainty surrounding the event comes as fallout from a similar conference in 2019 continues to spread into the new year.

The 2019 conference, which took place last April, resulted in the arrest of Virgil Griffith, a U.S. citizen and Ethereum developer who gave a presentation on blockchain technology in Pyongyang after receiving warnings from the FBI not to do so. The Southern District of New York charged Griffith in early January with one count of conspiracy to violate the International Emergency Economic Powers Act. He is now on bail awaiting trial.

Plans for a 2020 repeat of the conference drew a quick response from the United Nations, which flagged the event as a likely sanctions violation in a confidential report, according to Reuters. The website for the conference has since been taken down. Organizers did not respond to emails asking about the status of the event. However, one of the organizers listed on the website of the 2019 conference, Chris Emms of Coinstreet Partners, responded on the messaging app Telegram to say he is no longer involved. “I am not involved nor am I organising it whatsoever [sic],” wrote Emms.

With the fate of the event in doubt, experts are now debating whether it would have indeed complicated international efforts to restrict North Korea’s ability to finance its nuclear program. Over the last three years, the regime has proven itself highly proficient at using cryptocurrencies, both for criminal and non-criminal activities. Some analysts argue there’s little a developer like Griffith could teach officials in the North Korean regime about money laundering and sanctions evasion that they don’t already know.

“I don’t think he was sharing any shocking insights,” says Kayla Izenman, a research analyst at the Royal United Services Institute in London. “It’s pretty obvious that North Korea knows what they’re doing with cryptocurrency.”

According to Izenman’s own research at RUSI’s Centre for Financial Crime and Security Studies, North Korea has successfully used cryptocurrencies as a revenue stream and money laundering tool since at least 2017. In May of that year, North Korea-affiliated hackers deployed the Wannacry ransomware attack that first hit hospitals in the United Kingdom, but went on to circle the globe within five days. The Wannacry worm took computer hard drives hostage but offered victims the chance to recover data in return for bitcoins.

North Korea has sought out other cryptocurrency-related revenue streams as well. One of the most lucrative has been a series of hacking attacks carried out against online exchanges that often hold large sums of cryptocurrency. Izenman’s research indicates that the regime has been especially successful preying on low-security exchanges in South Korea. 

“They’ve actually been wildly successful in what they’ve done and it’s been, I would say, relatively low effort,” says Izenman.

Research indicates that the North Korean regime has also been mining cryptocurrency, either to use in illicit transactions today or to hoard for future use. A report released last week by the cybersecurity firm Recorded Future found that North Korea has increased its mining of the cryptocurrency Monero tenfold since 2018.

Monero is a privacy coin that obscures the identity of users, making it difficult, if not impossible, to track transactions. After Wannacry, hackers exchanged the bitcoin proceeds from that attack into Monero, at which point investigators lost track of the funds.

“Following the money is absolutely key to placing any leverage on the Kim regime,” says Priscilla Moriuchi, an author of the Recorded Future report and a senior fellow at Harvard’s Belfer Center for Science and International Affairs. “Where crypto enters and what exits that chain is absolutely critical.”

As the North Korean regime finds ways to bring in cryptocurrency, it also needs ways to cash out. To do so, it relies on regional cryptocurrency exchanges that operate below the radar. According to Izenman, there are plenty to choose from.

“It’s just a huge weak spot,” says Izenman. “In some places exchanges don’t have to do comprehensive due diligence because there’s no government regulation. In some places they don’t have to do it because the existing government regulation isn’t enforced. And some exchanges just aren’t compliant with regulation. There are so many gaps in the whole system.”

But Moriuchi stresses that there is a broader issue at play. “It’s not just cryptocurrency that has changed the game. It’s the entire weaponization of the internet,” says Moriuchi. “The things that the North Korean state are doing, engaging in the blockchain development, mining cryptocurrency, doing IT work, ripping off gamers, robbing banks. All of these are things that other countries are starting to emulate.”

Why then would a country that is itself host to some of the most expert cyber-criminals in the world need to host a conference about blockchain technology? Izenman suggests the event may serve more as a propaganda tool than technology transfer.

“What they want is the attention from having the conference and being able to fly in Americans and say, ‘look, we have a guy from Ethereum talking to us about crypto and how we can evade sanctions,’” says Izenman. “So I kind of get why, as a U.S. entity, you would be wanting to stop that idea from spreading.”

Algorithm Groups People More Fairly to Reduce AI Bias

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/tech-talk/computing/software/algorithm-groups-people-more-fairly-reduce-ai-bias

Say you work at a big company and you’re hiring for a new position. You receive thousands of resumes. To make a first pass, you may turn to artificial intelligence. A common AI task is called clustering, in which an algorithm sorts through a set of items (or people) and groups them into similar clusters.

In the hiring scenario, you might create clusters based on skills and experience and then hire from the top crop. But algorithms can be unfair. Even if you instruct them to ignore factors like gender and ethnicity, these attributes often correlate with factors you do consider, leading to clusters that don’t represent the larger pool’s demographics. As a result, you could end up hiring only white men.

In recent years, computer scientists have constructed fair clustering algorithms to counteract such biases, and a new one offers several advantages over those that came before. It could improve fair clustering, whether the clusters contain job candidates, customers, sick patients, or potential criminals.
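One simple way to see what a fairness constraint does is a toy scheme that forces every cluster to mirror the overall demographic mix: split each group by score, then deal its members round-robin across the clusters. This is a simplified illustration under invented data, not the algorithm the researchers describe.

```python
# Toy "balanced" clustering: cluster candidates by score while constraining
# each cluster to keep the same demographic proportions as the whole pool.

from collections import defaultdict

candidates = [
    {"name": "a1", "group": "A", "score": 91},
    {"name": "a2", "group": "A", "score": 78},
    {"name": "a3", "group": "A", "score": 65},
    {"name": "a4", "group": "A", "score": 50},
    {"name": "b1", "group": "B", "score": 88},
    {"name": "b2", "group": "B", "score": 70},
    {"name": "b3", "group": "B", "score": 60},
    {"name": "b4", "group": "B", "score": 45},
]

def balanced_clusters(people, k):
    """Sort each demographic group by score, then deal its members
    round-robin across k clusters so every cluster keeps the same mix."""
    by_group = defaultdict(list)
    for p in people:
        by_group[p["group"]].append(p)
    clusters = [[] for _ in range(k)]
    for members in by_group.values():
        members.sort(key=lambda p: -p["score"])
        for i, p in enumerate(members):
            clusters[i % k].append(p)
    return clusters

for c in balanced_clusters(candidates, 2):
    print(sorted(p["name"] for p in c))
```

An unconstrained score-based split might put every member of one group in the top cluster; the round-robin deal makes that impossible by construction.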

Toshiba’s Optimization Algorithm Sets Speed Record for Solving Combinatorial Problems

Post Syndicated from John Boyd original https://spectrum.ieee.org/tech-talk/computing/software/toshiba--optimization-algorithm-speed-record-combinatorial-problems

Toshiba has come up with a new way of solving combinatorial optimization problems. A classic example is the traveling salesman problem, in which a salesman must find the shortest route between many cities.

Such problems are found aplenty in science, engineering, and business. For instance, how should a utility select the optimal route for electric transmission lines, considering construction costs, safety, time, and the impact on people and the environment? Even the brute force of supercomputers is impractical when new variables increase the complexity of a question exponentially. 

But it turns out that many of these problems can be mapped to ground-state searches made by Ising machines. These specialized computers use mathematical models to describe the up-and-down spins of magnetic materials interacting with each other. Those spins can be used to represent a combinatorial problem. The optimal solution, then, becomes the equivalent of finding the ground state of the model.
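A minimal sketch of an Ising-style ground-state search, using plain simulated annealing on three coupled spins. The couplings here are arbitrary illustration values, and this is the generic textbook approach, not Toshiba’s algorithm.

```python
# Toy Ising ground-state search: spins are +/-1, the energy is
# E = -sum over pairs of J_ij * s_i * s_j, and we minimize it with
# simple simulated annealing (Metropolis acceptance, linear cooling).

import math
import random

random.seed(0)

# Couplings J[(i, j)]: positive J favors aligned spins i and j.
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -1.0}

def energy(spins):
    return -sum(j * spins[a] * spins[b] for (a, b), j in J.items())

def anneal(n_spins=3, steps=2000, t0=2.0):
    spins = [random.choice([-1, 1]) for _ in range(n_spins)]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3   # cool the temperature
        i = random.randrange(n_spins)
        old = energy(spins)
        spins[i] = -spins[i]                 # propose a single spin flip
        new = energy(spins)
        if new > old and random.random() > math.exp((old - new) / t):
            spins[i] = -spins[i]             # reject the uphill move
    return spins, energy(spins)

spins, e = anneal()
print(spins, e)
```

The three couplings are deliberately “frustrated”: not all pairs can be satisfied at once, so the search has to settle into one of several equally good ground states, which is exactly the kind of landscape that makes combinatorial problems hard.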

Battle of the Video Codecs: Coding-Efficient VVC vs. Royalty-Free AV1

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/battle-video-codecs-hevc-coding-efficiency-vvc-royalty-free-av1

Video is taking over the world. It’s projected to account for 82 percent of Internet traffic by 2022. And what started as an analog electronic medium for moving visuals has transformed into a digital format viewed on social media platforms, video sharing websites, and streaming services.

As video evolves, so too does the video encoding process, which applies compression algorithms to raw video so the files take up less space, making them easier to transmit and reducing the bandwidth required. Part of this evolution involves developing new codecs—encoders to compress videos plus decoders to decompress them for playback—to support higher resolutions, modern formats, and new applications such as 360-degree videos and virtual reality.
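The core intuition behind video compression can be shown with a toy example: successive frames are nearly identical, so storing per-pixel differences and run-length encoding them takes far less space than the raw frames. Real codecs such as HEVC use vastly more sophisticated prediction and entropy coding; this sketch only shows the redundancy being exploited.

```python
# Toy frame-delta compression: encode the difference between two frames
# with run-length encoding. Because most pixels don't change, the delta
# is mostly zeros and compresses to a handful of (value, run) pairs.

def rle(values):
    """Run-length encode a flat list into [value, run_length] pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

frame1 = [10] * 98 + [200, 200]   # 100 "pixels" of a first frame
frame2 = [10] * 98 + [201, 200]   # next frame: one pixel changed

delta = [b - a for a, b in zip(frame1, frame2)]
encoded = rle(delta)
print(encoded)   # a long run of zeros plus one small change
```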

Today’s dominant standard, HEVC (High Efficiency Video Coding), was finalized in 2013 as a joint effort between the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG). HEVC was designed to have better coding efficiency over the existing Advanced Video Coding (AVC) standard, with tests showing an average of 53 percent lower bit rate than AVC while still achieving the same subjective video quality. (Fun fact: HEVC was recognized with an Engineering Emmy Award in 2017 for enabling “efficient delivery in Ultra High Definition (UHD) content over multiple distribution channels,” while AVC garnered the same award in 2008.)

HEVC may be the incumbent, but two emerging options—VVC and AV1—could upend it.